
The many faces of Maxwell’s equations

March 5, 2018 math and physics play

[Click here for a PDF of this post with nicer formatting (including equation numbering and references)]

The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

Goals of the project.

This project had a few goals:

  1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
  2. Identify the subset of the literature that had direct relevance to electrical engineering.
  3. Create a complete, and as compact as possible, introduction of the prerequisites required
    for a graduate or advanced undergraduate electrical engineering student to be able to apply
    geometric algebra to problems in electromagnetism.

The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient \( \spacegrad \) and Laplacian \( \spacegrad^2 \), but the divergence and curl are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell’s notation to the modern Heaviside-Gibbs notation can be found in [16].

Quaternion representation.

In his second volume [11], the equations of electromagnetism are stated using quaternions (an extension of complex numbers to three dimensions), but quaternions are not actually used in the subsequent analysis. The modern form of Maxwell’s equations in quaternion form is
\begin{equation}\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } - \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}
\end{equation}
where \( \ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{} \) [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis \( \setlr{ \Bi, \Bj, \Bk } \), subject to the relations \(
\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad
\Bi \Bj = \Bk = -\Bj \Bi, \quad
\Bj \Bk = \Bi = -\Bk \Bj, \quad
\Bk \Bi = \Bj = -\Bi \Bk \).
There is clearly more structure to these equations than the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.

Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate \( x_4 = i c t \) for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current \((c\rho, \BJ)\), and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\begin{equation}\label{eqn:ece2500report:140}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} = - 4 \pi j_\nu,
\qquad
\sum_{\lambda\rho\mu=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} = 0,
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.
\end{equation}
A rank-2 complex (Hermitian) tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four-vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers, who can freely treat space and time independently, the advantages are not clear.

Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor \( g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1) \) to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\begin{equation}\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}
\end{equation}
where \( F^{\mu\nu} \) is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and \( J^\nu \) is a four-vector current containing both charge density and current density components. \Cref{eqn:ece2500report:40} provides a unified and simpler theoretical framework for electromagnetism, and is used extensively in physics but not engineering.
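
This pair of equations is compact enough that it can be verified mechanically. As a sanity check, here is a sketch using sympy, assuming one common SI sign convention for \( F^{\mu\nu} \) with coordinates \( x^\mu = (ct, x, y, z) \); the matrix layout and the pd helper below are my illustrative choices, not from any specific reference:

import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
Ex, Ey, Ez = [sp.Function(f'E_{s}')(t, x, y, z) for s in 'xyz']
Bx, By, Bz = [sp.Function(f'B_{s}')(t, x, y, z) for s in 'xyz']

# contravariant F^{mu nu}, one common SI convention, with x^0 = c t
F = sp.Matrix([
    [0,    -Ex/c, -Ey/c, -Ez/c],
    [Ex/c,  0,    -Bz,    By  ],
    [Ey/c,  Bz,    0,    -Bx  ],
    [Ez/c, -By,    Bx,    0   ]])
assert F.T == -F   # the tensor is antisymmetric by construction

def pd(mu, f):
    """partial/partial x^mu, with x^0 = c t."""
    return sp.diff(f, t) / c if mu == 0 else sp.diff(f, [t, x, y, z][mu])

# nu = 0 component of partial_mu F^{mu nu} = mu_0 J^nu, where J^0 = c rho:
print(sp.simplify(sum(pd(mu, F[mu, 0]) for mu in range(4))))
# prints (div E)/c, so equating to mu_0 c rho recovers div E = rho/epsilon_0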

Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\begin{equation}\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.
\end{equation}
One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions \( (dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1 \). Most grievous of the costs is the requirement to use differentials \( dx^1, dx^2, dx^3, c dt \), instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and likely one of the reasons that electromagnetism with differential forms has never become popular.
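
As a quick check that \ref{eqn:ece2500report:60} says what it should (a sketch using the conventions above, where the free-space form identifies \( \BB \) with \( \BH \)), collecting the \( dx^2 dx^3 (c dt) \) and \( dx^1 dx^2 dx^3 \) components of \( d\alpha = 0 \) recovers one component of Faraday's law and Gauss's law for magnetism
\begin{equation}\label{eqn:ece2500report:300}
\begin{aligned}
\PD{x^2}{E_3} - \PD{x^3}{E_2} + \inv{c} \PD{t}{H_1} &= 0 \\
\spacegrad \cdot \BH &= 0.
\end{aligned}
\end{equation}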

Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of \( R^3 \), is the de-facto language of electrical engineering for electromagnetism. Maxwell’s equations in the Heaviside-Gibbs vector formalism are
\begin{equation}\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= - \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}
\end{equation}
We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\begin{equation}\label{eqn:ece2500report:80}
\grad F = J,
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)
\end{equation}
is a bivector field containing both the electric and magnetic field “vectors”, \( \grad = \gamma^\mu \partial_\mu \) is the spacetime gradient, \( J \) is a four-vector containing electric charge and current components, and \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \) is the spacetime pseudoscalar, the ordered product of the basis vectors \( \setlr{ \gamma_\mu } \). The STA representation is explicitly relativistic, with non-Euclidean relationships between the basis vectors \( \gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0 \). In this formalism “spatial” vectors \( \Bx = \sum_{k>0} \gamma_k \gamma_0 x^k \) are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Not coincidentally, \( F \) has exactly the same structure as the 2-form \(\alpha\) above, provided the differential 1-forms \( dx^\mu \) are replaced by the basis vectors \( \gamma_\mu \). However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field \( F \) directly encodes the antisymmetric nature of \( F^{\mu\nu} \) from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]).
These authors use a Euclidean basis “vector” \( \Be_0 = 1 \) for the timelike direction, along with a standard Euclidean basis \( \setlr{ \Be_i } \) for the spatial directions. A hybrid scalar plus vector representation of four-vectors, called paravectors, is employed. Maxwell’s equation is written as a multivector equation
\begin{equation}\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,
\end{equation}
where \( J \) is a multivector source containing both the electric charge and currents, and \( c \) is the group velocity for the medium (assumed uniform and isotropic). \( J \) may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the hybrid electromagnetic field representation of STA above, however, \( I = \Be_1 \Be_2 \Be_3 \) is interpreted as the \( R^3 \) pseudoscalar, the ordered product of the basis vectors \( \setlr{ \Be_i } \), and \( F \) represents a multivector with vector and bivector components. Unlike STA, where \( \BE \) and \( \BB \) (or \( \BH \)) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in \( R^3 \), entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell’s equation, the paravector form is directly invertible using Green’s function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.
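
To make the encoding explicit, here is a sketch of the grade split of \ref{eqn:ece2500report:120}, assuming SI units, \( F = \BE + I c \BB \), and a source multivector \( J = \eta \lr{ c \rho - \BJ } \) with \( \eta = \sqrt{\mu_0/\epsilon_0} \) (other unit and source conventions shift these factors)
\begin{equation}\label{eqn:ece2500report:310}
\begin{aligned}
\spacegrad \cdot \BE &= \eta c \rho \\
\inv{c} \PD{t}{\BE} - c \spacegrad \cross \BB &= -\eta \BJ \\
I \lr{ \spacegrad \cross \BE + \PD{t}{\BB} } &= 0 \\
I c \lr{ \spacegrad \cdot \BB } &= 0,
\end{aligned}
\end{equation}
where the four grades (scalar, vector, bivector, trivector) reproduce Gauss's law, the Ampere-Maxwell equation, Faraday's law, and Gauss's law for magnetism respectively, since \( \eta c = 1/\epsilon_0 \).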

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

Results.

The end product of this project was a fairly small self-contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on \( R^2 \) and \( R^3 \) (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map for further exploration of the book. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell’s equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. It will be clear to the reader that it is often simpler to have the electric and magnetic fields on an equal footing; the book demonstrates this by deriving most results in terms of the total electromagnetic field \( F \). Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of \( F \) are given as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell’s equations, the paravector representation of Maxwell’s equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field \( I \BB \), such as \( \boldsymbol{\mathcal{b}} \) or \( \Bcap \). Given the inconsistencies in the literature, \( I \BB \) (or \( I \BH \)) will be used explicitly for the bivector (magnetic) components of the total electromagnetic field \( F \). In the geometric algebra literature, there are conflicting conventions for the operator \( \spacegrad + (1/c) \PDi{t}{} \), which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full to avoid picking from or explaining some of the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.

[16] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.

A derivation of the quaternion Maxwell’s equations using geometric algebra.

March 5, 2018 math and physics play

[Click here for a PDF of this post with nicer formatting]

Motivation.

The quaternion form of Maxwell’s equations as stated in [2] is nearly indecipherable. The modern quaternionic form of these equations can be found in [1]. Looking for this representation was driven by the question of whether or not the compact geometric algebra representation of Maxwell’s equations, \( \grad F = J \), is also possible using quaternions.

As quaternions may be viewed as the even subalgebra of GA(3,0), it is possible to derive the quaternion representation of Maxwell’s equations using only geometric algebra, including source terms and independent of the heat considerations discussed in [1]. Such a derivation will be performed here. Examination of the results appears to answer the question about the compact representation in the negative.

Quaternions as multivectors.

Quaternions are vector plus scalar sums, where the vector basis \( \setlr{ \Bi, \Bj, \Bk } \) is subject to the complex-like multiplication rules
\begin{equation}\label{eqn:complex:240}
\begin{aligned}
\Bi^2 &= \Bj^2 = \Bk^2 = -1 \\
\Bi \Bj &= \Bk = -\Bj \Bi \\
\Bj \Bk &= \Bi = -\Bk \Bj \\
\Bk \Bi &= \Bj = -\Bi \Bk.
\end{aligned}
\end{equation}

We can represent these basis vectors in terms of the \R{3} unit bivectors
\begin{equation}\label{eqn:quaternion2maxwellWithGA:260}
\begin{aligned}
\Bi &= \Be_{3} \Be_{2} = -I \Be_1 \\
\Bj &= \Be_{1} \Be_{3} = -I \Be_2 \\
\Bk &= \Be_{2} \Be_{1} = -I \Be_3,
\end{aligned}
\end{equation}
where \( I = \Be_1 \Be_2 \Be_3 \) is the ordered product of the \R{3} basis elements. Within geometric algebra, the quaternion basis “vectors” are more properly viewed as a bivector space basis that happens to have dimension three.
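
These identifications are easy to check numerically. Here is a minimal sketch, assuming the Python clifford package is available (the package choice is mine, not from [1]):

import clifford as cf

# R^3 geometric algebra; blades holds the basis multivectors
layout, blades = cf.Cl(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

# the quaternion units as R^3 unit bivectors
i, j, k = e3 * e2, e1 * e3, e2 * e1

assert i * i == -1 and j * j == -1 and k * k == -1
assert i * j == k and j * i == -k
assert j * k == i and k * j == -i
assert k * i == j and i * k == -j
print("quaternion multiplication rules verified")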

Following [1], we may introduce a quaternionic spacetime gradient, and express that in terms of geometric algebra
\begin{equation}\label{eqn:quaternion2maxwellWithGA:280}
\frac{d}{dr} = \inv{c} \PD{t}{}
+ \Bi \PD{x}{}
+ \Bj \PD{y}{}
+ \Bk \PD{z}{}
=
\inv{c}\PD{t}{} -I \spacegrad.
\end{equation}

Of particular interest is how to write the curl, divergence and time partials in terms of the quaternionic spacetime gradient or its components. Like [1], we will use modern commutator notation for an antisymmetric difference of products
\begin{equation}\label{eqn:quaternion2maxwellWithGA:600}
\antisymmetric{a}{b} = a b - b a,
\end{equation}
and anticommutator notation for a symmetric difference of products
\begin{equation}\label{eqn:quaternion2maxwellWithGA:620}
\symmetric{a}{b} = a b + b a.
\end{equation}
The curl of a vector \( \Bf \) in terms of vector products with the gradient is
\begin{equation}\label{eqn:quaternion2maxwellWithGA:300}
\begin{aligned}
\spacegrad \cross \Bf
&= -I(\spacegrad \wedge \Bf) \\
&= -\frac{I}{2} \lr{ \spacegrad \Bf - \Bf \spacegrad } \\
&= \frac{1}{2} \lr{ (-I \spacegrad) \Bf - \Bf (-I\spacegrad) } \\
&= \inv{2} \antisymmetric{ -I \spacegrad }{ \Bf } \\
&= \inv{2} \antisymmetric{ \frac{d}{dr} }{ \Bf },
\end{aligned}
\end{equation}
where the last step takes advantage of the fact that the timelike contribution of the spacetime gradient commutes with any vector \( \Bf \) due to its scalar nature, so cancels out of the commutator. In a similar fashion, the dot product may be written as an anticommutator
\begin{equation}\label{eqn:quaternion2maxwellWithGA:480}
\spacegrad \cdot \Bf
=
\inv{2} \lr{ \spacegrad \Bf + \Bf \spacegrad }
=
\inv{2} \symmetric{ \spacegrad}{ \Bf },
\end{equation}
as can the scalar time derivative
\begin{equation}\label{eqn:quaternion2maxwellWithGA:500}
\PD{t}{\Bf}
= \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \Bf }.
\end{equation}

Quaternionic form of Maxwell’s equations.

Using geometric algebra as an intermediate transformation, let’s see directly how to express Maxwell’s equations in terms of this quaternionic operator. Our starting point is Maxwell’s equations in their standard macroscopic form

\begin{equation}\label{eqn:quaternion2maxwellWithGA:320}
\spacegrad \cross \BH = \BJ + \PD{t}{\BD}
\end{equation}
\begin{equation}\label{eqn:quaternion2maxwellWithGA:340}
\spacegrad \cdot \BD = \rho
\end{equation}
\begin{equation}\label{eqn:quaternion2maxwellWithGA:360}
\spacegrad \cross \BE = - \PD{t}{\BB}
\end{equation}
\begin{equation}\label{eqn:quaternion2maxwellWithGA:380}
\spacegrad \cdot \BB = 0.
\end{equation}

Inserting the commutator and anticommutator representations of the curl, divergence, and time derivative into the Maxwell-Faraday equation and Gauss’s law for magnetism, we have
\begin{equation}\label{eqn:quaternion2maxwellWithGA:400}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } &= - \inv{2} \symmetric{ \inv{c}\PD{t}{} }{ c \BB } \\
\inv{2} \symmetric{ \spacegrad }{ c \BB } &= 0,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:quaternion2maxwellWithGA:420}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BE } + \inv{2} \symmetric{ \inv{c}\PD{t}{} }{ -I c \BB } &= 0 \\
\inv{2} \symmetric{ -I \spacegrad }{ -I c \BB } &= 0.
\end{aligned}
\end{equation}
We can introduce quaternionic electric and magnetic field “vectors” (really bivectors)
\begin{equation}\label{eqn:quaternion2maxwellWithGA:440}
\begin{aligned}
\boldsymbol{\mathcal{E}} &= -I \BE = \Bi E_x + \Bj E_y + \Bk E_z \\
\boldsymbol{\mathcal{B}} &= -I \BB = \Bi B_x + \Bj B_y + \Bk B_z,
\end{aligned}
\end{equation}
and substitute these and sum to find the quaternionic representation of the two source-free Maxwell’s equations
\begin{equation}\label{eqn:quaternion2maxwellWithGA:460}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{E}} } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \boldsymbol{\mathcal{B}} } = 0.
}
\end{equation}

Inserting the quaternion curl, divergence and time derivative representations into the Ampere-Maxwell law and Gauss’s law gives
\begin{equation}\label{eqn:quaternion2maxwellWithGA:520}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } &= \BJ + \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \BD } \\
\inv{2} \symmetric{ \spacegrad }{ c \BD } &= c \rho,
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:quaternion2maxwellWithGA:540}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BH } - \inv{2} \symmetric{ \inv{c} \PD{t}{} } { -I c \BD } &= -I \BJ \\
-\inv{2} \symmetric{ -I \spacegrad }{ -I c \BD } &= c \rho.
\end{aligned}
\end{equation}
With the quaternionic displacement vector, magnetic field, and current density
\begin{equation}\label{eqn:quaternion2maxwellWithGA:580}
\begin{aligned}
\boldsymbol{\mathcal{D}} &= -I \BD = \Bi D_x + \Bj D_y + \Bk D_z \\
\boldsymbol{\mathcal{H}} &= -I \BH = \Bi H_x + \Bj H_y + \Bk H_z \\
\boldsymbol{\mathcal{J}} &= -I \BJ = \Bi J_x + \Bj J_y + \Bk J_z,
\end{aligned}
\end{equation}
and summing yields the remaining two Maxwell equations in their quaternionic form
\begin{equation}\label{eqn:quaternion2maxwellWithGA:560}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{H}} } - \inv{2} \symmetric{ \frac{d}{dr} } { c \boldsymbol{\mathcal{D}} } = c \rho + \boldsymbol{\mathcal{J}}.
}
\end{equation}

Conclusions.

Maxwell’s equations in the quaternion representation have a structure that is not apparent in the Heaviside-Gibbs notation. There is some elegance to this result, but it comes with the cost of having to use commutator and anticommutator operators, which are arguably non-intuitive. The compact geometric algebra representation of Maxwell’s equation does not appear possible with a quaternion representation, as an additional complex degree of freedom would be required (biquaternions?). Such a degree of freedom may also allow a quaternion representation of the (fictitious) magnetic sources that are useful in antenna theory. Magnetic sources are easily incorporated into the current multivector in geometric algebra, but if done so in the derivation above, yield an odd grade multivector source which has no quaternion representation.

References

[1] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[2] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

Commutators for some symmetry operators

December 16, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Q: [1] pr 4.2

If \( \mathcal{T}_\Bd \), \( \mathcal{D}(\ncap, \phi) \), and \( \pi \) denote the translation, rotation, and parity operators respectively, which of the following commute, and why?

  • (a) \( \mathcal{T}_\Bd \) and \( \mathcal{T}_{\Bd'} \), translations in different directions.
  • (b) \( \mathcal{D}(\ncap, \phi) \) and \( \mathcal{D}(\ncap', \phi') \), rotations in different directions.
  • (c) \( \mathcal{T}_\Bd \) and \( \pi \).
  • (d) \( \mathcal{D}(\ncap,\phi)\) and \( \pi \).

A: (a)

Consider
\begin{equation}\label{eqn:symmetryOperatorCommutators:20}
\begin{aligned}
\mathcal{T}_\Bd \mathcal{T}_{\Bd'} \ket{\Bx}
&=
\mathcal{T}_\Bd \ket{\Bx + \Bd'} \\
&=
\ket{\Bx + \Bd' + \Bd},
\end{aligned}
\end{equation}

and the reverse application of the translation operators
\begin{equation}\label{eqn:symmetryOperatorCommutators:40}
\begin{aligned}
\mathcal{T}_{\Bd'} \mathcal{T}_{\Bd} \ket{\Bx}
&=
\mathcal{T}_{\Bd'} \ket{\Bx + \Bd} \\
&=
\ket{\Bx + \Bd + \Bd'} \\
&=
\ket{\Bx + \Bd' + \Bd}.
\end{aligned}
\end{equation}

so we see that

\begin{equation}\label{eqn:symmetryOperatorCommutators:60}
\antisymmetric{\mathcal{T}_\Bd}{\mathcal{T}_{\Bd'}} \ket{\Bx} = 0,
\end{equation}

for any position state \( \ket{\Bx} \), and therefore in general they commute.

A: (b)

That rotations do not commute when they are in different directions (like any two orthogonal directions) need not be belaboured.

A: (c)

We have
\begin{equation}\label{eqn:symmetryOperatorCommutators:80}
\begin{aligned}
\mathcal{T}_\Bd \pi \ket{\Bx}
&=
\mathcal{T}_\Bd \ket{-\Bx} \\
&=
\ket{-\Bx + \Bd},
\end{aligned}
\end{equation}

yet
\begin{equation}\label{eqn:symmetryOperatorCommutators:100}
\begin{aligned}
\pi \mathcal{T}_\Bd \ket{\Bx}
&=
\pi \ket{\Bx + \Bd} \\
&=
\ket{-\Bx - \Bd} \\
&\ne
\ket{-\Bx + \Bd}.
\end{aligned}
\end{equation}

so, in general \( \antisymmetric{\mathcal{T}_\Bd}{\pi} \ne 0 \).
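
These results are also easy to illustrate numerically. Here is a discretized sketch (my own toy model, not from [1]): translations modelled as cyclic shifts of a sampled wave function on a periodic grid, parity as grid reversal.

import numpy as np

rng = np.random.default_rng(0)
psi = rng.standard_normal(16)   # an arbitrary sampled "wave function"

def T(d, f):      # translation by d grid points
    return np.roll(f, d)

def parity(f):    # parity: reverse the grid
    return f[::-1]

# (a) translations commute
print(np.allclose(T(3, T(5, psi)), T(5, T(3, psi))))      # True
# (c) translation and parity do not
print(np.allclose(T(3, parity(psi)), parity(T(3, psi))))  # False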

A: (d)

We have

\begin{equation}\label{eqn:symmetryOperatorCommutators:120}
\begin{aligned}
\pi \mathcal{D}(\ncap, \phi) \ket{\Bx}
&=
\pi \mathcal{D}(\ncap, \phi) \pi^\dagger \pi \ket{\Bx} \\
&=
\pi \lr{ \sum_{k=0}^\infty \frac{(-i \phi \BJ \cdot \ncap/\Hbar)^k}{k!} } \pi^\dagger \pi \ket{\Bx} \\
&=
\sum_{k=0}^\infty \frac{(-i \phi (\pi \BJ \pi^\dagger) \cdot (\pi \ncap \pi^\dagger)/\Hbar )^k}{k!} \pi \ket{\Bx} \\
&=
\sum_{k=0}^\infty \frac{(-i \phi \BJ \cdot \ncap/\Hbar)^k}{k!} \pi \ket{\Bx} \\
&=
\mathcal{D}(\ncap, \phi) \pi \ket{\Bx},
\end{aligned}
\end{equation}

so \( \antisymmetric{\mathcal{D}(\ncap, \phi)}{\pi} \ket{\Bx} = 0 \), for any position state \( \ket{\Bx} \), and therefore these operators commute in general.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

2D SHO xy perturbation

December 7, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Q: [1] pr. 5.4

Given a 2D SHO with Hamiltonian

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:20}
H_0 = \inv{2m} \lr{ p_x^2 + p_y^2 } + \frac{m \omega^2}{2} \lr{ x^2 + y^2 },
\end{equation}

  • (a)
    What are the energies and degeneracies of the three lowest states?

  • (b)
    With perturbation

    \begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:40}
    V = m \omega^2 x y,
    \end{equation}

    calculate the first order energy perturbations and the zeroth order perturbed states.

  • (c)
    Solve the \( H_0 + \delta V \) problem exactly, and compare.

A: part (a)

Recall that we have

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:60}
H \ket{n_1, n_2} =
\Hbar\omega
\lr{
n_1 + n_2 + 1
}
\ket{n_1, n_2},
\end{equation}

So the three lowest energy states are \( \ket{0,0}, \ket{1,0}, \ket{0,1} \) with energies \( \Hbar \omega, 2 \Hbar \omega, 2 \Hbar \omega \) respectively (with a twofold degeneracy for the second two energy eigenkets).

A: part (b)

Consider the action of \( x y \) on the \( \beta = \setlr{ \ket{0,0}, \ket{1,0}, \ket{0,1} } \) subspace. Those are

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:200}
\begin{aligned}
x y \ket{0,0}
&=
\frac{x_0^2}{2} \lr{ a + a^\dagger } \lr{ b + b^\dagger } \ket{0,0} \\
&=
\frac{x_0^2}{2} \lr{ b + b^\dagger } \ket{1,0} \\
&=
\frac{x_0^2}{2} \ket{1,1}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:220}
\begin{aligned}
x y \ket{1, 0}
&=
\frac{x_0^2}{2} \lr{ a + a^\dagger } \lr{ b + b^\dagger } \ket{1,0} \\
&=
\frac{x_0^2}{2} \lr{ a + a^\dagger } \ket{1,1} \\
&=
\frac{x_0^2}{2} \lr{ \ket{0,1} + \sqrt{2} \ket{2,1} } .
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:240}
\begin{aligned}
x y \ket{0, 1}
&=
\frac{x_0^2}{2} \lr{ a + a^\dagger } \lr{ b + b^\dagger } \ket{0,1} \\
&=
\frac{x_0^2}{2} \lr{ b + b^\dagger } \ket{1,1} \\
&=
\frac{x_0^2}{2} \lr{ \ket{1,0} + \sqrt{2} \ket{1,2} }.
\end{aligned}
\end{equation}

The matrix representation of \( m \omega^2 x y \) with respect to the subspace spanned by basis \( \beta \) above is

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:260}
m \omega^2 x y
\sim
\inv{2} \Hbar \omega
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{bmatrix}.
\end{equation}

This diagonalizes with

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:300}
U
=
\begin{bmatrix}
1 & 0 \\
0 & \tilde{U}
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:320}
\tilde{U}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1 \\
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:340}
D =
\inv{2} \Hbar \omega
\begin{bmatrix}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1 \\
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:360}
m \omega^2 x y = U D U^\dagger = U D U.
\end{equation}
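
This matrix and its diagonalization are easy to double check numerically. Here is a sketch in a truncated oscillator basis (units \( \Hbar = \omega = 1 \); the truncation size N = 4 is an arbitrary choice):

import numpy as np

N = 4
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
I = np.eye(N)

# V = m omega^2 x y = (1/2)(a + a^dag)(b + b^dag) in these units
V = 0.5 * np.kron(a + a.T, I) @ np.kron(I, a + a.T)

def ket(n1, n2):
    v = np.zeros(N * N)
    v[n1 * N + n2] = 1.0
    return v

basis = [ket(0, 0), ket(1, 0), ket(0, 1)]    # the subspace beta
P = np.array([[u @ V @ w for w in basis] for u in basis])
print(P)                      # [[0, 0, 0], [0, 0, 0.5], [0, 0.5, 0]]
print(np.linalg.eigvalsh(P))  # [-0.5, 0, 0.5], matching D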

The unperturbed Hamiltonian in the original basis is

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:380}
H_0
=
\Hbar \omega
\begin{bmatrix}
1 & 0 \\
0 & 2 I
\end{bmatrix},
\end{equation}

So the transformation to the diagonal \( x y \) basis leaves the initial Hamiltonian unaltered

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:400}
\begin{aligned}
H_0′
&= U^\dagger H_0 U \\
&=
\Hbar \omega
\begin{bmatrix}
1 & 0 \\
0 & \tilde{U} 2 I \tilde{U}
\end{bmatrix} \\
&=
\Hbar \omega
\begin{bmatrix}
1 & 0 \\
0 & 2 I
\end{bmatrix}.
\end{aligned}
\end{equation}

Now we can compute the first order energy shifts almost by inspection. Writing the new basis as \( \beta' = \setlr{ \ket{0}, \ket{1}, \ket{2} } \), those energy shifts are just the diagonal elements of the perturbation's matrix representation

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:420}
\begin{aligned}
E^{{(1)}}_0 &= \bra{0} V \ket{0} = 0 \\
E^{{(1)}}_1 &= \bra{1} V \ket{1} = \inv{2} \Hbar \omega \\
E^{{(1)}}_2 &= \bra{2} V \ket{2} = -\inv{2} \Hbar \omega.
\end{aligned}
\end{equation}

The new energies are

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:440}
\begin{aligned}
E_0 &\rightarrow \Hbar \omega \\
E_1 &\rightarrow \Hbar \omega \lr{ 2 + \delta/2 } \\
E_2 &\rightarrow \Hbar \omega \lr{ 2 - \delta/2 }.
\end{aligned}
\end{equation}

A: part (c)

For the exact solution, it’s possible to rotate the coordinate system in a way that kills the explicit \( x y \) term of the perturbation. That we could do this for \( x, y \) operators wasn’t obvious to me, but after doing so (and rotating the momentum operators the same way) the new operators still have the required commutators. Let

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:80}
\begin{aligned}
\begin{bmatrix}
u \\
v
\end{bmatrix}
&=
\begin{bmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} \\
&=
\begin{bmatrix}
x \cos\theta + y \sin\theta \\
-x \sin\theta + y \cos\theta
\end{bmatrix}.
\end{aligned}
\end{equation}

Similarly, for the momentum operators, let
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:100}
\begin{aligned}
\begin{bmatrix}
p_u \\
p_v
\end{bmatrix}
&=
\begin{bmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
p_x \\
p_y
\end{bmatrix} \\
&=
\begin{bmatrix}
p_x \cos\theta + p_y \sin\theta \\
-p_x \sin\theta + p_y \cos\theta
\end{bmatrix}.
\end{aligned}
\end{equation}

For the commutators of the new operators we have

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:120}
\begin{aligned}
\antisymmetric{u}{p_u}
&=
\antisymmetric{x \cos\theta + y \sin\theta}{p_x \cos\theta + p_y \sin\theta} \\
&=
\antisymmetric{x}{p_x} \cos^2\theta + \antisymmetric{y}{p_y} \sin^2\theta \\
&=
i \Hbar \lr{ \cos^2\theta + \sin^2\theta } \\
&=
i\Hbar.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:140}
\begin{aligned}
\antisymmetric{v}{p_v}
&=
\antisymmetric{-x \sin\theta + y \cos\theta}{-p_x \sin\theta + p_y \cos\theta} \\
&=
\antisymmetric{x}{p_x} \sin^2\theta + \antisymmetric{y}{p_y} \cos^2\theta \\
&=
i \Hbar.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:160}
\begin{aligned}
\antisymmetric{u}{p_v}
&=
\antisymmetric{x \cos\theta + y \sin\theta}{-p_x \sin\theta + p_y \cos\theta} \\
&= \cos\theta \sin\theta \lr{ -\antisymmetric{x}{p_x} + \antisymmetric{y}{p_y} } \\
&=
0.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:180}
\begin{aligned}
\antisymmetric{v}{p_u}
&=
\antisymmetric{-x \sin\theta + y \cos\theta}{p_x \cos\theta + p_y \sin\theta} \\
&= \cos\theta \sin\theta \lr{ -\antisymmetric{x}{p_x} + \antisymmetric{y}{p_y} } \\
&=
0.
\end{aligned}
\end{equation}

We see that the new operators are canonically conjugate as required. For this problem, we just want a 45 degree rotation, with

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:460}
\begin{aligned}
x &= \inv{\sqrt{2}} \lr{ u + v } \\
y &= \inv{\sqrt{2}} \lr{ u - v }.
\end{aligned}
\end{equation}

We have
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:480}
\begin{aligned}
x^2 + y^2
&=
\inv{2} \lr{ (u+v)^2 + (u-v)^2 } \\
&=
\inv{2} \lr{ 2 u^2 + 2 v^2 + 2 u v - 2 u v } \\
&=
u^2 + v^2,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:500}
\begin{aligned}
p_x^2 + p_y^2
&=
\inv{2} \lr{ (p_u+p_v)^2 + (p_u-p_v)^2 } \\
&=
\inv{2} \lr{ 2 p_u^2 + 2 p_v^2 + 2 p_u p_v - 2 p_u p_v } \\
&=
p_u^2 + p_v^2,
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:520}
\begin{aligned}
x y
&=
\inv{2} \lr{ (u+v)(u-v) } \\
&=
\inv{2} \lr{ u^2 - v^2 }.
\end{aligned}
\end{equation}

The perturbed Hamiltonian is

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:540}
\begin{aligned}
H_0 + \delta V
&=
\inv{2m} \lr{ p_u^2 + p_v^2 }
+ \inv{2} m \omega^2 \lr{ u^2 + v^2 + \delta u^2 - \delta v^2 } \\
&=
\inv{2m} \lr{ p_u^2 + p_v^2 }
+ \inv{2} m \omega^2 \lr{ u^2(1 + \delta) + v^2 (1 - \delta) }.
\end{aligned}
\end{equation}

In this coordinate system, the corresponding eigensystem is

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:560}
H \ket{n_1, n_2}
= \Hbar \omega \lr{ \lr{ n_1 + \inv{2} } \sqrt{1 + \delta} + \lr{ n_2 + \inv{2} } \sqrt{ 1 - \delta } } \ket{n_1, n_2}.
\end{equation}

For small \( \delta \)

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:580}
\lr{ n_1 + \inv{2} } \sqrt{1 + \delta} + \lr{ n_2 + \inv{2} } \sqrt{ 1 - \delta }
\approx
n_1 + n_2 + 1
+ \inv{2} n_1 \delta
- \inv{2} n_2 \delta,
\end{equation}

so
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:600}
H \ket{n_1, n_2}
\approx \Hbar \omega \lr{ 1 + n_1 + n_2 + \inv{2} n_1 \delta - \inv{2} n_2 \delta
} \ket{n_1, n_2}.
\end{equation}

The lowest order perturbed energy levels are

\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:620}
\ket{0,0} \rightarrow \Hbar \omega
\end{equation}
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:640}
\ket{1,0} \rightarrow \Hbar \omega \lr{ 2 + \inv{2} \delta }
\end{equation}
\begin{equation}\label{eqn:2dHarmonicOscillatorXYPerturbation:660}
\ket{0,1} \rightarrow \Hbar \omega \lr{ 2 - \inv{2} \delta }
\end{equation}

The degeneracy of the \( \ket{0,1}, \ket{1,0} \) states has been split, and to first order the exact energies match the results of the first order perturbation calculation above.
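
A numeric spot check (a sketch; the truncation size and \( \delta \) are arbitrary choices, in units \( \Hbar = m = \omega = 1 \)) confirms that the lowest exact eigenvalues agree with these estimates:

import numpy as np

N, delta = 20, 0.05
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
I, n = np.eye(N), np.diag(np.arange(N))
x = (a + a.T) / np.sqrt(2)

# H = H0 + delta V, with H0 = n1 + n2 + 1 and V = x y in these units
H = (np.kron(n, I) + np.kron(I, n) + np.eye(N * N)
     + delta * np.kron(x, I) @ np.kron(I, x))
print(np.sort(np.linalg.eigvalsh(H))[:3])
# approximately [1, 2 - delta/2, 2 + delta/2], as expected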

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 16: Addition of angular momenta. Taught by Prof. Arun Paramekanti

November 17, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 3 content.

Addition of angular momenta

  • For orbital angular momentum

    \begin{equation}\label{eqn:qmLecture16:20}
    \begin{aligned}
    \hat{\BL}_1 &= \hat{\Br}_1 \cross \hat{\Bp}_1 \\
    \hat{\BL}_2 &= \hat{\Br}_2 \cross \hat{\Bp}_2,
    \end{aligned}
    \end{equation}

    We can show that it is true that

    \begin{equation}\label{eqn:qmLecture16:40}
    \antisymmetric{L_{1i} + L_{2i}}{L_{1j} + L_{2j}} =
    i \Hbar \epsilon_{i j k} \lr{ L_{1k} + L_{2k} },
    \end{equation}

    because the angular momenta of the independent particles commute. Given this, is it fair to consider that the sum

    \begin{equation}\label{eqn:qmLecture16:60}
    \hat{\BL}_1 + \hat{\BL}_2
    \end{equation}

    is also an angular momentum?

  • Given \( \ket{l_1, m_1} \) and \( \ket{l_2, m_2} \), if a measurement is made of \( \hat{\BL}_1 + \hat{\BL}_2 \), what do we get?

    Specifically, what do we get for

    \begin{equation}\label{eqn:qmLecture16:80}
    \lr{\hat{\BL}_1 + \hat{\BL}_2}^2,
    \end{equation}

    and for
    \begin{equation}\label{eqn:qmLecture16:100}
    \lr{\hat{L}_{1z} + \hat{L}_{2z}}.
    \end{equation}

    For the latter, we get

    \begin{equation}\label{eqn:qmLecture16:120}
    \lr{\hat{L}_{1z} + \hat{L}_{2z}}\ket{ l_1, m_1 ; l_2, m_2 }
    =
    \lr{ \Hbar m_1 + \Hbar m_2 } \ket{ l_1, m_1 ; l_2, m_2 }.
    \end{equation}

    Given
    \begin{equation}\label{eqn:qmLecture16:140}
    \hat{L}_{1z} + \hat{L}_{2z} = \hat{L}_z^{\textrm{tot}},
    \end{equation}

    we find
    \begin{equation}\label{eqn:qmLecture16:160}
    \begin{aligned}
    \antisymmetric{\hat{L}_z^{\textrm{tot}}}{\hat{\BL}_1^2} &= 0 \\
    \antisymmetric{\hat{L}_z^{\textrm{tot}}}{\hat{\BL}_2^2} &= 0 \\
    \antisymmetric{\hat{L}_z^{\textrm{tot}}}{\hat{L}_{1z}} &= 0 \\
    \antisymmetric{\hat{L}_z^{\textrm{tot}}}{\hat{L}_{2z}} &= 0.
    \end{aligned}
    \end{equation}

    We also find

    \begin{equation}\label{eqn:qmLecture16:180}
    \begin{aligned}
    \antisymmetric{(\hat{\BL}_1 + \hat{\BL}_2)^2}{\hat{\BL}_1^2}
    &=
    \antisymmetric{\hat{\BL}_1^2 + \hat{\BL}_2^2 + 2 \hat{\BL}_1 \cdot
    \hat{\BL}_2}{\hat{\BL}_1^2} \\
    &=
    0,
    \end{aligned}
    \end{equation}

    but for
    \begin{equation}\label{eqn:qmLecture16:200}
    \begin{aligned}
    \antisymmetric{(\hat{\BL}_1 + \hat{\BL}_2)^2}{\hat{L}_{1z}}
    &=
    \antisymmetric{\hat{\BL}_1^2 + \hat{\BL}_2^2 + 2 \hat{\BL}_1 \cdot
    \hat{\BL}_2}{\hat{L}_{1z}} \\
    &=
    2 \antisymmetric{\hat{\BL}_1 \cdot \hat{\BL}_2}{\hat{L}_{1z}} \\
    &\ne 0.
    \end{aligned}
    \end{equation}

Classically if we have measured \( \hat{\BL}_{1} \) and \( \hat{\BL}_{2} \) then we know the total angular momentum as sketched in fig. 1.

fig. 1. Classical addition of angular momenta.

In QM, where we don’t know all the components of the angular momentum simultaneously, things get fuzzier. For example, if the \( \hat{L}_{1z} \) and \( \hat{L}_{2z} \) components have been measured, we have the angular momentum defined only within a conical region, as sketched in fig. 2.

fig. 2. Addition of angular momenta given measured L_z.

Suppose we know \( \hat{L}_z^{\textrm{tot}} \) precisely, but have imprecise information about \( \lr{\hat{\BL}^{\textrm{tot}}}^2 \). Can we determine bounds for this? Let \( \ket{\psi} = \ket{ l_1, m_1 ; l_2, m_2 } \), so

\begin{equation}\label{eqn:qmLecture16:220}
\begin{aligned}
\bra{\psi} \lr{ \hat{\BL}_1 + \hat{\BL}_2 }^2 \ket{\psi}
&=
\bra{\psi} \hat{\BL}_1^2 \ket{\psi}
+ \bra{\psi} \hat{\BL}_2^2 \ket{\psi}
+ 2 \bra{\psi} \hat{\BL}_1 \cdot \hat{\BL}_2 \ket{\psi} \\
&=
l_1 \lr{ l_1 + 1} \Hbar^2
+ l_2 \lr{ l_2 + 1} \Hbar^2
+ 2
\bra{\psi} \hat{\BL}_1 \cdot \hat{\BL}_2 \ket{\psi}.
\end{aligned}
\end{equation}

Using the Cauchy-Schwarz inequality

\begin{equation}\label{eqn:qmLecture16:240}
\Abs{\braket{\phi}{\psi}}^2 \le
\Abs{\braket{\phi}{\phi}}
\Abs{\braket{\psi}{\psi}},
\end{equation}

which is the equivalent of the classical relationship
\begin{equation}\label{eqn:qmLecture16:260}
\lr{ \BA \cdot \BB }^2 \le \BA^2 \BB^2.
\end{equation}

Applying this to the last term, we have

\begin{equation}\label{eqn:qmLecture16:280}
\begin{aligned}
\lr{ \bra{\psi} \hat{\BL}_1 \cdot \hat{\BL}_2 \ket{\psi} }^2
&\le
\bra{ \psi} \hat{\BL}_1 \cdot \hat{\BL}_1 \ket{\psi}
\bra{ \psi} \hat{\BL}_2 \cdot \hat{\BL}_2 \ket{\psi} \\
&=
\Hbar^4
l_1 \lr{ l_1 + 1 }
l_2 \lr{ l_2 + 1 }.
\end{aligned}
\end{equation}

Thus for the max we have

\begin{equation}\label{eqn:qmLecture16:300}
\bra{\psi} \lr{ \hat{\BL}_1 + \hat{\BL}_2 }^2 \ket{\psi}
\le
\Hbar^2 l_1 \lr{ l_1 + 1 }
+\Hbar^2 l_2 \lr{ l_2 + 1 }
+ 2 \Hbar^2 \sqrt{ l_1 \lr{ l_1 + 1 } l_2 \lr{ l_2 + 1 } },
\end{equation}

and for the min
\begin{equation}\label{eqn:qmLecture16:360}
\bra{\psi} \lr{ \hat{\BL}_1 + \hat{\BL}_2 }^2 \ket{\psi}
\ge
\Hbar^2 l_1 \lr{ l_1 + 1 }
+\Hbar^2 l_2 \lr{ l_2 + 1 }
- 2 \Hbar^2 \sqrt{ l_1 \lr{ l_1 + 1 } l_2 \lr{ l_2 + 1 } }.
\end{equation}

To try to pretty up these estimates, starting with the max, note that if we replace a portion of the RHS with something bigger, we are left with a strict less-than relationship.

That is

\begin{equation}\label{eqn:qmLecture16:320}
\begin{aligned}
l_1 \lr{ l_1 + 1 } &< \lr{ l_1 + \inv{2} }^2 \\
l_2 \lr{ l_2 + 1 } &< \lr{ l_2 + \inv{2} }^2.
\end{aligned}
\end{equation}

That is

\begin{equation}\label{eqn:qmLecture16:340}
\begin{aligned}
\bra{\psi} \lr{ \hat{\BL}_1 + \hat{\BL}_2 }^2 \ket{\psi}
&<
\Hbar^2 \lr{ l_1 \lr{ l_1 + 1 } + l_2 \lr{ l_2 + 1 } + 2 \lr{ l_1 + \inv{2} } \lr{ l_2 + \inv{2} } } \\
&=
\Hbar^2 \lr{ l_1^2 + l_2^2 + l_1 + l_2 + 2 l_1 l_2 + l_1 + l_2 + \inv{2} } \\
&=
\Hbar^2 \lr{ \lr{ l_1 + l_2 + \inv{2} } \lr{ l_1 + l_2 + \frac{3}{2} } - \inv{4} },
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:qmLecture16:380}
l_{\textrm{tot}} \lr{ l_{\textrm{tot}} + 1 } < \lr{ l_1 + l_2 + \inv{2} } \lr{ l_1 + l_2 + \frac{3}{2} },
\end{equation}

which gives

\begin{equation}\label{eqn:qmLecture16:400}
l_{\textrm{tot}} < l_1 + l_2 + \inv{2}.
\end{equation}

Finally, given a quantization requirement, that is

\begin{equation}\label{eqn:t:1}
\boxed{
l_{\textrm{tot}} \le l_1 + l_2.
}
\end{equation}

Similarly, for the min, we find

\begin{equation}\label{eqn:qmLecture16:440}
\begin{aligned}
\bra{\psi} \lr{ \hat{\BL}_1 + \hat{\BL}_2 }^2 \ket{\psi}
&>
\Hbar^2
\lr{
l_1 \lr{ l_1 + 1 }
+ l_2 \lr{ l_2 + 1 }
- 2 \lr{ l_1 + \inv{2} } \lr{ l_2 + \inv{2} }
} \\
&=
\Hbar^2
\lr{
l_1^2 + l_2^2
- 2 l_1 l_2
- \inv{2}
}.
\end{aligned}
\end{equation}

This will be finished Thursday, but we should get

\begin{equation}\label{eqn:t:2}
\boxed{
\Abs{l_1 - l_2} \le l_{\textrm{tot}} \le l_1 + l_2.
}
\end{equation}
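
These bounds are easy to verify numerically for specific cases. Here is a sketch (with sample values \( l_1 = 1, l_2 = 2 \), and \( \Hbar = 1 \); the construction of the angular momentum matrices is standard):

import numpy as np

def ang_mom(l):
    """(Lx, Ly, Lz) for angular momentum l, hbar = 1, basis m = l, ..., -l."""
    m = np.arange(l, -l - 1, -1)
    Lp = np.diag(np.sqrt(l * (l + 1) - m[1:] * (m[1:] + 1)), k=1)
    return (Lp + Lp.T) / 2, (Lp - Lp.T) / 2j, np.diag(m).astype(complex)

l1, l2 = 1, 2
d1, d2 = 2 * l1 + 1, 2 * l2 + 1
Ltot = [np.kron(A, np.eye(d2)) + np.kron(np.eye(d1), B)
        for A, B in zip(ang_mom(l1), ang_mom(l2))]
L2 = sum(Lc @ Lc for Lc in Ltot)
print(sorted(set(np.round(np.linalg.eigvalsh(L2), 8))))
# [2.0, 6.0, 12.0]: l(l+1) for l = 1, 2, 3, i.e. |l1 - l2| <= l <= l1 + l2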

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Third update of aggregate notes for phy1520, Graduate Quantum Mechanics.

November 9, 2015 phy1520

I’ve posted a third update of my aggregate notes for PHY1520H Graduate Quantum Mechanics, taught by Prof. Arun Paramekanti. In addition to what was noted previously, this contains lecture notes up to lecture 13, my solutions for the third problem set, and some additional worked practice problems.

Most of the content was posted individually in the following locations, but those original documents will not be maintained individually any further.

Position operator in momentum space representation

November 8, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

A derivation of the position space representation of the momentum operator \( -i \Hbar \partial_x \) is made in [1], starting with the position-momentum commutator. Here I’ll repeat that argument for the momentum space representation of the position operator.

What we want to do is expand the matrix element of the commutator. First using the definition of the commutator

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:20}
\bra{p'} X P - P X \ket{p''}
=
i \Hbar \braket{p'}{p''}
=
i \Hbar \delta(p' - p''),
\end{equation}

and then by inserting an identity operation in a momentum space basis

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:40}
\begin{aligned}
\bra{p'} X P - P X \ket{p''}
&=
\int dp
\bra{p'} X \ket{p}\bra{p} P \ket{p''}
-\int dp
\bra{p'} P \ket{p}\bra{p} X \ket{p''} \\
&=
\int dp
\bra{p'} X \ket{p} p \delta(p - p'')
-\int dp
p \delta(p' - p)
\bra{p} X \ket{p''} \\
&=
\bra{p'} X \ket{p''} p''
-
p' \bra{p'} X \ket{p''}.
\end{aligned}
\end{equation}

So we have

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:60}
\bra{p'} X \ket{p''} p''
-
p' \bra{p'} X \ket{p''}
=
i \Hbar \delta(p' - p'').
\end{equation}

Because the RHS is zero whenever \( p' \ne p'' \), the matrix element \( \bra{p'} X \ket{p''} \) must also include a delta function. Let

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:80}
\bra{p'} X \ket{p''} = \delta(p' - p'') X(p'').
\end{equation}

Because \ref{eqn:positionOperatorInMomentumSpace:60} is an operator equation that really only takes on meaning when applied to a wave function and integrated, we do that

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:100}
\int dp'' \delta(p' - p'') X(p'') p'' \psi(p'')
-
\int dp'' p' \delta(p' - p'') X(p'') \psi(p'')
=
\int dp'' i \Hbar \delta(p' - p'') \psi(p''),
\end{equation}

or
\begin{equation}\label{eqn:positionOperatorInMomentumSpace:120}
i \Hbar \psi(p')
=
X(p') p' \psi(p')
-
p' X(p') \psi(p').
\end{equation}

Provided \( X(p') \) operates on everything to its right, this equation is solved by setting

\begin{equation}\label{eqn:positionOperatorInMomentumSpace:140}
\boxed{
X(p') = i \Hbar \PD{p'}{}.
}
\end{equation}
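
As a final check (a sketch using sympy), this operator does reproduce the commutator that we started with:

import sympy as sp

p, hbar = sp.symbols('p hbar')    # p stands in for p'
psi = sp.Function('psi')(p)

X = lambda f: sp.I * hbar * sp.diff(f, p)    # candidate X = i hbar d/dp
print(sp.simplify(X(p * psi) - p * X(psi)))  # I*hbar*psi(p): [X, P] = i hbar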

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

PHY1520H Graduate Quantum Mechanics. Lecture 12: Symmetry (cont.). Taught by Prof. Arun Paramekanti

November 5, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering chap. 4 content from [1].

Parity (review)

\begin{equation}\label{eqn:qmLecture12:20}
\hat{\Pi} \hat{x} \hat{\Pi} = - \hat{x}
\end{equation}
\begin{equation}\label{eqn:qmLecture12:40}
\hat{\Pi} \hat{p} \hat{\Pi} = - \hat{p}
\end{equation}

These are polar vectors, in contrast to an axial vector such as \( \BL = \Br \cross \Bp \).

\begin{equation}\label{eqn:qmLecture12:60}
\hat{\Pi}^2 = 1
\end{equation}

\begin{equation}\label{eqn:qmLecture12:80}
\Psi(x) \rightarrow \Psi(-x)
\end{equation}

If \( \antisymmetric{\hat{\Pi}}{\hat{H}} = 0 \) then all the eigenstates are either

  • even: \( \hat{\Pi} \) eigenvalue is \( + 1 \).
  • odd: \( \hat{\Pi} \) eigenvalue is \( - 1 \).

We are done with discrete symmetry operators for now.

Translations

Define a (continuous) translation operator

\begin{equation}\label{eqn:qmLecture12:100}
\hat{T}_\epsilon \ket{x} = \ket{x + \epsilon}
\end{equation}

The action of this operator is sketched in fig. 1.

fig. 1. Translation operation.

This is a unitary operator

\begin{equation}\label{eqn:qmLecture12:120}
\hat{T}_{-\epsilon} = \hat{T}_{\epsilon}^\dagger = \hat{T}_{\epsilon}^{-1}
\end{equation}

In a position basis, the action of this operator is

\begin{equation}\label{eqn:qmLecture12:140}
\bra{x} \hat{T}_{\epsilon} \ket{\psi} = \braket{x-\epsilon}{\psi} = \psi(x - \epsilon)
\end{equation}

\begin{equation}\label{eqn:qmLecture12:160}
\Psi(x - \epsilon) \approx \Psi(x) - \epsilon \PD{x}{\Psi}
\end{equation}

\begin{equation}\label{eqn:qmLecture12:180}
\bra{x} \hat{T}_{\epsilon} \ket{\Psi}
= \braket{x}{\Psi} - \frac{\epsilon}{\Hbar} \bra{ x} i \hat{p} \ket{\Psi}
\end{equation}

\begin{equation}\label{eqn:qmLecture12:200}
\hat{T}_{\epsilon} \approx \lr{ 1 - i \frac{\epsilon}{\Hbar} \hat{p} }
\end{equation}

A non-infinitesimal translation can be composed of many small translations, as sketched in fig. 2.

fig. 2. Composition of small translations.

For \( \epsilon \rightarrow 0, N \rightarrow \infty, N \epsilon = a \), the total translation operator is

\begin{equation}\label{eqn:qmLecture12:220}
\begin{aligned}
\hat{T}_{a}
&= \hat{T}_{\epsilon}^N \\
&= \lim_{\epsilon \rightarrow 0, N \rightarrow \infty, N \epsilon = a }
\lr{ 1 - i \frac{\epsilon}{\Hbar} \hat{p} }^N \\
&= e^{-i a \hat{p}/\Hbar}
\end{aligned}
\end{equation}

The momentum \( \hat{p} \) is called the generator of translations. If a Hamiltonian \( H \) is translationally invariant, then

\begin{equation}\label{eqn:qmLecture12:240}
\antisymmetric{\hat{T}_{a}}{H} = 0, \qquad \forall a.
\end{equation}

This means that momentum will be a good quantum number

\begin{equation}\label{eqn:qmLecture12:260}
\antisymmetric{\hat{p}}{H} = 0.
\end{equation}

Rotations

Rotations form a non-Abelian group, since in general the order of rotations matters: \( \hatR_1 \hatR_2 \ne \hatR_2 \hatR_1 \).

Given a rotation acting on a ket

\begin{equation}\label{eqn:qmLecture12:280}
\hatR \ket{\Br} = \ket{R \Br},
\end{equation}

observe that the action of the rotation operator on a wave function is inverted

\begin{equation}\label{eqn:qmLecture12:300}
\bra{\Br} \hatR \ket{\Psi}
=
\braket{R^{-1} \Br}{\Psi}
= \Psi(R^{-1} \Br).
\end{equation}

Example: Z axis normal rotation

Consider an infinitesimal rotation about the z-axis as sketched in fig. 3(a),(b)

fig. 3(a). Rotation about z-axis.

fig. 3(b). Rotation about z-axis.

\begin{equation}\label{eqn:qmLecture12:320}
\begin{aligned}
x' &= x - \epsilon y \\
y' &= y + \epsilon x \\
z' &= z
\end{aligned}
\end{equation}

The rotated wave function is

\begin{equation}\label{eqn:qmLecture12:340}
\tilde{\Psi}(x,y,z)
= \Psi( x + \epsilon y, y - \epsilon x, z )
=
\Psi( x, y, z )
+
\epsilon y \underbrace{\PD{x}{\Psi}}_{i \hat{p}_x/\Hbar}
-
\epsilon x \underbrace{\PD{y}{\Psi}}_{i \hat{p}_y/\Hbar}.
\end{equation}

The state must then transform as

\begin{equation}\label{eqn:qmLecture12:360}
\ket{\tilde{\Psi}}
=
\lr{
1
+ i \frac{\epsilon}{\Hbar} \hat{y} \hat{p}_x
- i \frac{\epsilon}{\Hbar} \hat{x} \hat{p}_y
}
\ket{\Psi}.
\end{equation}

Observe that the combination \( \hat{x} \hat{p}_y - \hat{y} \hat{p}_x \) is the \( \hat{L}_z \) component of angular momentum \( \hat{\BL} = \hat{\Br} \cross \hat{\Bp} \), so the infinitesimal rotation can be written

\begin{equation}\label{eqn:qmLecture12:380}
\boxed{
\hatR_z(\epsilon) \ket{\Psi}
=
\lr{ 1 - i \frac{\epsilon}{\Hbar} \hat{L}_z } \ket{\Psi}.
}
\end{equation}

For a finite rotation \( \epsilon \rightarrow 0, N \rightarrow \infty, \phi = \epsilon N \), the total rotation is

\begin{equation}\label{eqn:qmLecture12:420}
\hatR_z(\phi)
=
\lr{ 1 - \frac{i \epsilon}{\Hbar} \hat{L}_z }^N,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture12:440}
\boxed{
\hatR_z(\phi)
=
e^{-i \frac{\phi}{\Hbar} \hat{L}_z}.
}
\end{equation}

Note that \( \antisymmetric{\hat{L}_x}{\hat{L}_y} \ne 0 \).

By construction using Euler angles or any other method, a general rotation will include contributions from all components of the angular momentum operator, and will have the structure

\begin{equation}\label{eqn:qmLecture12:480}
\boxed{
\hatR_\ncap(\phi)
=
e^{-i \frac{\phi}{\Hbar} \lr{ \hat{\BL} \cdot \ncap }}.
}
\end{equation}

Rotationally invariant \( \hat{H} \).

Given a rotationally invariant Hamiltonian

\begin{equation}\label{eqn:qmLecture12:520}
\antisymmetric{\hat{R}_\ncap(\phi)}{\hat{H}} = 0 \qquad \forall \ncap, \phi,
\end{equation}

then for every \( \ncap \)

\begin{equation}\label{eqn:qmLecture12:540}
\antisymmetric{\BL \cdot \ncap}{\hat{H}} = 0,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture12:560}
\antisymmetric{L_i}{\hat{H}} = 0,
\end{equation}

The non-Abelian nature of the rotation group implies degeneracies in the spectrum.

Time-reversal

Imagine that we have something moving along a curve at time \( t = 0 \), and ending up at the final position at time \( t = t_f \).

fig. 4. Time reversal trajectory.

Imagine that we flip the direction of motion (i.e. flipping the velocity) and run time backwards so the final-time state becomes the initial state.

If the time reversal operator is designated \( \hat{\Theta} \), with operation

\begin{equation}\label{eqn:qmLecture12:580}
\hat{\Theta} \ket{\Psi} = \ket{\tilde{\Psi}},
\end{equation}

so that

\begin{equation}\label{eqn:qmLecture12:600}
\hat{\Theta}^{-1} e^{-i \hat{H} t/\Hbar} \hat{\Theta} \ket{\Psi(t)} = \ket{\Psi(0)},
\end{equation}

or

\begin{equation}\label{eqn:qmLecture12:620}
\hat{\Theta}^{-1} e^{-i \hat{H} t/\Hbar} \hat{\Theta} \ket{\Psi(0)} = \ket{\Psi(-t)}.
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

A curious proof of the Baker-Campbell-Hausdorff formula

November 4, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Equation (39) of [1] states the Baker-Campbell-Hausdorff formula for two operators \( a, b\) that commute with their commutator \( \antisymmetric{a}{b} \)

\begin{equation}\label{eqn:bakercambell:20}
e^a e^b = e^{a + b + \antisymmetric{a}{b}/2},
\end{equation}

and provides the outline of an interesting method of proof. That method is to consider the derivative of

\begin{equation}\label{eqn:bakercambell:40}
f(\lambda) = e^{\lambda a} e^{\lambda b} e^{-\lambda (a + b)}.
\end{equation}

That derivative is
\begin{equation}\label{eqn:bakercambell:60}
\begin{aligned}
\frac{df}{d\lambda}
&=
e^{\lambda a} a e^{\lambda b} e^{-\lambda (a + b)}
+
e^{\lambda a} b e^{\lambda b} e^{-\lambda (a + b)}
-
e^{\lambda a} e^{\lambda b} (a + b) e^{-\lambda (a + b)} \\
&=
e^{\lambda a} \lr{
a e^{\lambda b}
+
b e^{\lambda b}
-
e^{\lambda b} (a+b)
}
e^{-\lambda (a + b)} \\
&=
e^{\lambda a} \lr{
\antisymmetric{a}{e^{\lambda b}}
+
\antisymmetric{b}{e^{\lambda b}}
}
e^{-\lambda (a + b)} \\
&=
e^{\lambda a}
\antisymmetric{a}{e^{\lambda b}}
e^{-\lambda (a + b)},
\end{aligned}
\end{equation}

where the last step uses \( \antisymmetric{b}{e^{\lambda b}} = 0 \).

The commutator above is proportional to \( \antisymmetric{a}{b} \)

\begin{equation}\label{eqn:bakercambell:80}
\begin{aligned}
\antisymmetric{a}{e^{\lambda b}}
&=
\sum_{k=0}^\infty \frac{\lambda^k}{k!} \antisymmetric{a}{ b^k } \\
&=
\sum_{k=0}^\infty \frac{\lambda^k}{k!} k b^{k-1} \antisymmetric{a}{b} \\
&=
\lambda \sum_{k=1}^\infty \frac{\lambda^{k-1}}{(k-1)!} b^{k-1}
\antisymmetric{a}{b} \\
&=
\lambda e^{\lambda b} \antisymmetric{a}{b},
\end{aligned}
\end{equation}

so, since \( \antisymmetric{a}{b} \) commutes with all the other factors and can be pulled out front,

\begin{equation}\label{eqn:bakercambell:100}
\frac{df}{d\lambda} = \lambda \antisymmetric{a}{b} f.
\end{equation}

To get the above, we should also demonstrate by induction that \( \antisymmetric{a}{ b^k } = k b^{k-1} \antisymmetric{a}{b} \).

This clearly holds for \( k = 0,1 \). Assuming that it holds for \( k \), we have

\begin{equation}\label{eqn:bakercambell:120}
\begin{aligned}
\antisymmetric{a}{b^{k+1}}
&=
a b^{k+1} - b^{k+1} a \\
&=
\lr{ \antisymmetric{a}{b^{k}} + b^k a } b - b^{k+1} a \\
&=
k b^{k-1} \antisymmetric{a}{b} b
+ b^k \lr{ \antisymmetric{a}{b} + b a }
- b^{k+1} a \\
&=
k b^{k} \antisymmetric{a}{b}
+ b^k \antisymmetric{a}{b} \\
&=
(k+1) b^k \antisymmetric{a}{b}.
\end{aligned}
\end{equation}
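
This identity is also easy to spot check in a concrete model. One such model (an assumption of this check, not anything from the original problem) is \( a = d/dx \) and \( b = \) multiplication by \( x \), for which \( \antisymmetric{a}{b} = 1 \) commutes with everything:

```python
# sympy spot check of [a, b^k] = k b^{k-1} [a, b] in the model
# a = d/dx, b = multiplication by x, where [a, b] = 1.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

for k in range(6):
    lhs = sp.diff(x**k * f, x) - x**k * sp.diff(f, x)   # [d/dx, x^k] f
    rhs = k * x**(k - 1) * f                            # k b^{k-1} [a,b] f
    assert sp.simplify(lhs - rhs) == 0

# the same model checks [a, e^{lam b}] = lam e^{lam b} [a, b]
lam = sp.symbols('lam')
lhs = sp.diff(sp.exp(lam * x) * f, x) - sp.exp(lam * x) * sp.diff(f, x)
assert sp.simplify(lhs - lam * sp.exp(lam * x) * f) == 0
```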

Observe that \ref{eqn:bakercambell:100}, subject to the boundary condition \( f(0) = 1 \), is solved by

\begin{equation}\label{eqn:bakercambell:140}
f = e^{\lambda^2\antisymmetric{a}{b}/2},
\end{equation}

which gives

\begin{equation}\label{eqn:bakercambell:160}
e^{\lambda^2 \antisymmetric{a}{b}/2} =
e^{\lambda a} e^{\lambda b} e^{-\lambda (a + b)}.
\end{equation}

Right multiplication by \( e^{\lambda (a + b)} \), which commutes with \( e^{\lambda^2 \antisymmetric{a}{b}/2} \), followed by setting \( \lambda = 1 \), recovers \ref{eqn:bakercambell:20} as desired.
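
As a final sanity check, the identity can be verified numerically for matrices that satisfy the hypothesis. A minimal sketch, assuming 3x3 strictly upper triangular (Heisenberg algebra) matrices whose commutator is central:

```python
# numeric check of e^a e^b = e^{a + b + [a,b]/2} for matrices whose
# commutator commutes with both factors
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
s, t = rng.normal(size=2)
a = np.zeros((3, 3)); a[0, 1] = s
b = np.zeros((3, 3)); b[1, 2] = t
comm = a @ b - b @ a                  # only a corner entry s*t: central

# [a, b] commutes with both a and b, the hypothesis of the theorem
assert np.allclose(comm @ a, a @ comm)
assert np.allclose(comm @ b, b @ comm)

lhs = expm(a) @ expm(b)
rhs = expm(a + b + comm / 2)
assert np.allclose(lhs, rhs)
```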

What I wonder, looking at this, is what thought process led to trying this in the first place. It is not what I would consider an obvious approach to demonstrating this identity.

References

[1] Roy J Glauber. Some notes on multiple-boson processes. Physical Review, 84 (3), 1951.

Plane wave ground state expectation for SHO

October 18, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Problem 2.18 of [1] asks us, for a 1D SHO, to show that

\begin{equation}\label{eqn:exponentialExpectationGroundState:20}
\bra{0} e^{i k x} \ket{0} = \exp\lr{ -k^2 \bra{0} x^2 \ket{0}/2 }.
\end{equation}

Despite the simple appearance of this problem, I found it quite involved to show. To do so, start with a series expansion of the expectation

\begin{equation}\label{eqn:exponentialExpectationGroundState:40}
\bra{0} e^{i k x} \ket{0}
=
\sum_{m=0}^\infty \frac{(i k)^m}{m!} \bra{0} x^m \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:60}
X = \lr{ a + a^\dagger },
\end{equation}

so that

\begin{equation}\label{eqn:exponentialExpectationGroundState:80}
x
= \sqrt{\frac{\Hbar}{2 \omega m}} X
= \frac{x_0}{\sqrt{2}} X,
\end{equation}

where \( x_0 = \sqrt{\Hbar/(\omega m)} \).

Consider the first few values of \( \bra{0} X^n \ket{0} \)

\begin{equation}\label{eqn:exponentialExpectationGroundState:100}
\begin{aligned}
\bra{0} X \ket{0}
&=
\bra{0} \lr{ a + a^\dagger } \ket{0} \\
&=
\braket{0}{1} \\
&=
0,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:120}
\begin{aligned}
\bra{0} X^2 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^2 \ket{0} \\
&=
\braket{1}{1} \\
&=
1,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:140}
\begin{aligned}
\bra{0} X^3 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^3 \ket{0} \\
&=
\bra{1} \lr{ \sqrt{2} \ket{2} + \ket{0} } \\
&=
0.
\end{aligned}
\end{equation}

Whenever the power \( n \) in \( X^n \) is odd, the braket can be split into a bra that has contributions from only even eigenstates and a ket with only odd eigenstates (or vice versa), which are orthogonal. We conclude that \( \bra{0} X^n \ket{0} = 0 \) when \( n \) is odd.

Noting that \( \bra{0} x^2 \ket{0} = \ifrac{x_0^2}{2} \), this leaves

\begin{equation}\label{eqn:exponentialExpectationGroundState:160}
\begin{aligned}
\bra{0} e^{i k x} \ket{0}
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \bra{0} x^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \lr{ \frac{x_0^2}{2} }^m \bra{0} X^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{1}{(2 m)!} \lr{ -k^2 \bra{0} x^2 \ket{0} }^m \bra{0} X^{2m} \ket{0}.
\end{aligned}
\end{equation}

This problem is now reduced to showing that

\begin{equation}\label{eqn:exponentialExpectationGroundState:180}
\frac{1}{(2 m)!} \bra{0} X^{2m} \ket{0} = \inv{m! 2^m},
\end{equation}

or

\begin{equation}\label{eqn:exponentialExpectationGroundState:200}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&= \frac{(2m)!}{m! 2^m} \\
&= \frac{ (2m)(2m-1)(2m-2) \cdots (2)(1) }{2^m m!} \\
&= \frac{ 2^m \lr{ (m)(m-1)(m-2) \cdots (1) } \lr{ (2m-1)(2m-3) \cdots (3)(1) } }{2^m m!} \\
&= (2m-1)!!,
\end{aligned}
\end{equation}

where \( n!! = n(n-2)(n-4)\cdots \).
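
Before grinding through the operator algebra, this claim is easy to spot check numerically in a truncated Fock basis. In this sketch (my addition) the truncation size is an assumption; it just has to exceed the largest state index reached:

```python
# check <0| X^{2m} |0> = (2m-1)!!, and that the odd moments vanish,
# by repeatedly applying a truncated X = a + a^dagger to |0>
import numpy as np

N, kmax = 20, 15
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
X = a + a.T                                    # X = a + a^dagger

def double_factorial(k):
    return 1 if k <= 0 else k * double_factorial(k - 2)

v = np.zeros(N)
v[0] = 1.0                                     # the ground state |0>
for k in range(kmax):
    # at this point v = X^k |0>, so v[0] = <0| X^k |0>
    if k % 2 == 0:
        assert abs(v[0] - double_factorial(k - 1)) < 1e-9 * double_factorial(k - 1)
    else:
        assert abs(v[0]) < 1e-12               # odd moments vanish by parity
    v = X @ v
```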

It looks like \( \bra{0} X^{2m} \ket{0} \) can be expanded by inserting an identity operator and proceeding recursively, like

\begin{equation}\label{eqn:exponentialExpectationGroundState:220}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&=
\bra{0} X^2 \lr{ \sum_{n=0}^\infty \ket{n}\bra{n} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^2 \lr{ \ket{0}\bra{0} + \ket{2}\bra{2} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^{2m-2} \ket{0} + \bra{0} X^2 \ket{2} \bra{2} X^{2m-2} \ket{0}.
\end{aligned}
\end{equation}

This has made use of the observation that \( \bra{0} X^2 \ket{n} = 0 \) for all \( n \ne 0,2 \). The remaining term includes the factor

\begin{equation}\label{eqn:exponentialExpectationGroundState:240}
\begin{aligned}
\bra{0} X^2 \ket{2}
&=
\bra{0} \lr{a + a^\dagger}^2 \ket{2} \\
&=
\lr{ \bra{0} + \sqrt{2} \bra{2} } \ket{2} \\
&=
\sqrt{2}.
\end{aligned}
\end{equation}

Since \( \sqrt{2} \ket{2} = \lr{a^\dagger}^2 \ket{0} \), the expectation of interest can be written

\begin{equation}\label{eqn:exponentialExpectationGroundState:260}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + \bra{0} a^2 X^{2m-2} \ket{0}.
\end{equation}

How do we expand the second term? Let's look at how \( a \) and \( X \) commute

\begin{equation}\label{eqn:exponentialExpectationGroundState:280}
\begin{aligned}
a X
&=
\antisymmetric{a}{X} + X a \\
&=
\antisymmetric{a}{a + a^\dagger} + X a \\
&=
\antisymmetric{a}{a^\dagger} + X a \\
&=
1 + X a,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:300}
\begin{aligned}
a^2 X
&=
a \lr{ a X } \\
&=
a \lr{ 1 + X a } \\
&=
a + a X a \\
&=
a + \lr{ 1 + X a } a \\
&=
2 a + X a^2.
\end{aligned}
\end{equation}

Proceeding to expand \( a^2 X^n \), we find
\begin{equation}\label{eqn:exponentialExpectationGroundState:320}
\begin{aligned}
a^2 X^3 &= 6 X + 6 X^2 a + X^3 a^2 \\
a^2 X^4 &= 12 X^2 + 8 X^3 a + X^4 a^2 \\
a^2 X^5 &= 20 X^3 + 10 X^4 a + X^5 a^2 \\
a^2 X^6 &= 30 X^4 + 12 X^5 a + X^6 a^2.
\end{aligned}
\end{equation}

It appears that we have
\begin{equation}\label{eqn:exponentialExpectationGroundState:340}
\antisymmetric{a^2}{X^n} = a^2 X^n - X^n a^2 = \beta_n X^{n-2} + 2 n X^{n-1} a,
\end{equation}

where

\begin{equation}\label{eqn:exponentialExpectationGroundState:360}
\beta_n = \beta_{n-1} + 2 (n-1),
\end{equation}

and \( \beta_2 = 2 \). Some goofing around shows that \( \beta_n = n(n-1) \), so the induction hypothesis is

\begin{equation}\label{eqn:exponentialExpectationGroundState:380}
\antisymmetric{a^2}{X^n} = n(n-1) X^{n-2} + 2 n X^{n-1} a.
\end{equation}
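
Before verifying this algebraically below, the hypothesis can be spot checked numerically by applying both sides of \( a^2 X^n = n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2 \) to low-lying Fock states in a truncated basis. The truncation size and test vector in this sketch are assumptions:

```python
# numeric spot check of a^2 X^n = n(n-1) X^{n-2} + 2n X^{n-1} a + X^n a^2,
# applied to a vector supported far below the truncation corner
import numpy as np

N = 20
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator
X = a + a.T

rng = np.random.default_rng(42)
v = np.zeros(N)
v[:4] = rng.normal(size=4)     # support on |0>..|3> only

for n in range(2, 9):
    Xn = np.linalg.matrix_power(X, n)
    lhs = a @ (a @ (Xn @ v))
    rhs = (n * (n - 1) * np.linalg.matrix_power(X, n - 2) @ v
           + 2 * n * np.linalg.matrix_power(X, n - 1) @ (a @ v)
           + Xn @ (a @ (a @ v)))
    assert np.allclose(lhs, rhs)
```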

Let’s check the induction
\begin{equation}\label{eqn:exponentialExpectationGroundState:400}
\begin{aligned}
a^2 X^{n+1}
&=
a^2 X^{n} X \\
&=
\lr{ n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2 } X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} a X + X^n a^2 X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} \lr{ 1 + X a } + X^n \lr{ 2 a + X a^2 } \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} + 2 n X^{n} a
+ 2 X^n a
+ X^{n+1} a^2 \\
&=
X^{n+1} a^2 + (2 + 2 n) X^{n} a + \lr{ 2 n + n(n-1) } X^{n-1} \\
&=
X^{n+1} a^2 + 2(n + 1) X^{n} a + (n+1) n X^{n-1},
\end{aligned}
\end{equation}

which concludes the induction. Applying this to a ground state expectation, the terms ending in \( a \) are killed by \( a \ket{0} = 0 \), giving

\begin{equation}\label{eqn:exponentialExpectationGroundState:420}
\bra{ 0 } a^2 X^{n} \ket{0 } = n(n-1) \bra{0} X^{n-2} \ket{0},
\end{equation}

and

\begin{equation}\label{eqn:exponentialExpectationGroundState:440}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + (2m-2)(2m-3) \bra{0} X^{2m-4} \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:460}
\sigma_{n} = \bra{0} X^n \ket{0},
\end{equation}

so that the recurrence relation, for \( 2n \ge 4 \), is

\begin{equation}\label{eqn:exponentialExpectationGroundState:480}
\sigma_{2n} = \sigma_{2n-2} + (2n-2)(2n-3) \sigma_{2n-4}.
\end{equation}

We want to show that this simplifies to

\begin{equation}\label{eqn:exponentialExpectationGroundState:500}
\sigma_{2n} = (2n-1)!!.
\end{equation}
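
That the recurrence does reproduce double factorials can be confirmed with a few lines of Python (a quick sketch, using nothing beyond the recurrence itself):

```python
# check that sigma_{2n} = sigma_{2n-2} + (2n-2)(2n-3) sigma_{2n-4},
# with sigma_0 = sigma_2 = 1, reproduces (2n-1)!!
def double_factorial(k):
    return 1 if k <= 0 else k * double_factorial(k - 2)

sigma = {0: 1, 2: 1}
for two_n in range(4, 40, 2):
    sigma[two_n] = sigma[two_n - 2] + (two_n - 2) * (two_n - 3) * sigma[two_n - 4]
    assert sigma[two_n] == double_factorial(two_n - 1)
```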

The first values are

\begin{equation}\label{eqn:exponentialExpectationGroundState:540}
\sigma_0 = \bra{0} X^0 \ket{0} = 1,
\end{equation}
\begin{equation}\label{eqn:exponentialExpectationGroundState:560}
\sigma_2 = \bra{0} X^2 \ket{0} = 1,
\end{equation}

which gives us the right result for the first step of the induction

\begin{equation}\label{eqn:exponentialExpectationGroundState:580}
\begin{aligned}
\sigma_4
&= \sigma_2 + 2 \times 1 \times \sigma_0 \\
&= 1 + 2 \\
&= 3!!.
\end{aligned}
\end{equation}

For the general induction term, consider

\begin{equation}\label{eqn:exponentialExpectationGroundState:600}
\begin{aligned}
\sigma_{2n + 2}
&= \sigma_{2n} + 2 n (2n - 1) \sigma_{2n - 2} \\
&= (2n-1)!! + 2 n (2n - 1) (2n - 3)!! \\
&= (2n + 1) (2n -1)!! \\
&= (2n + 1)!!,
\end{aligned}
\end{equation}

which completes the final induction. That was also the last thing required to complete the proof, so we are done!
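
As an end-to-end sanity check, the full identity can also be confirmed numerically in a truncated basis. In this sketch the units \( \Hbar = m = \omega = 1 \) (so \( x_0 = 1 \)), the truncation size, and the value of \( k \) are all assumptions:

```python
# numeric check that <0| e^{ikx} |0> = exp(-k^2 <0| x^2 |0> / 2)
import numpy as np
from scipy.linalg import expm

N = 60
x0 = 1.0                                   # x_0 = sqrt(hbar/(m omega))
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
x = (x0 / np.sqrt(2)) * (a + a.T)          # x = (x_0/sqrt(2)) (a + a^dagger)

k = 0.8
lhs = expm(1j * k * x)[0, 0]               # <0| e^{ikx} |0>
x2 = (x @ x)[0, 0]                         # <0| x^2 |0> = x_0^2 / 2
rhs = np.exp(-k**2 * x2 / 2)
assert abs(lhs - rhs) < 1e-10
print(lhs.real, rhs)                       # both ~ exp(-0.16)
```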

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.