
Area within closed boundary

August 11, 2024 math and physics play

[Click here for a PDF version of this post]

Motivation.

On vacation I was reading some more of [1]. It was mentioned in passing that the area contained within a closed parameterized curve is given by
\begin{equation}\label{eqn:containedArea:20}
A = \inv{2} \int_{t_0}^{t_1} \lr{x y' - y x'} dt,
\end{equation}
where \( x = x(t), y = y(t), t \in [t_0, t_1] \). This has the look of a Stokes’ theorem coordinate expansion (specifically, the Green’s theorem special case of Stokes’), but with a somewhat mysterious looking factor of one half out in front. My aim in this post is to understand the origins of this area relationship, and play with it a bit.

Circular coordinates example.

The book suggests that the reader verify this for a circular parameterization, so we’ll do that here too.

Let
\begin{equation}\label{eqn:containedArea:40}
\begin{aligned}
x(t) &= r \cos t \\
y(t) &= r \sin t,
\end{aligned}
\end{equation}
where \( t \in [0, 2 \pi] \). Plugging in this, we have
\begin{equation}\label{eqn:containedArea:60}
\begin{aligned}
A
&= \inv{2} \int_0^{2 \pi} \lr{ r \cos t \lr{ r \cos t } - r \sin t \lr{ - r \sin t } } dt \\
&= \frac{r^2}{2} \int_0^{2 \pi} \lr{ \cos^2 t + \sin^2 t } dt \\
&= \frac{2 \pi r^2}{2} \\
&= \pi r^2.
\end{aligned}
\end{equation}
This simple example works out.
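The book's suggested check also generalizes easily. Here is a sympy sketch (sympy and the elliptical parameterization are my choices, not the book's) that recovers the expected \( \pi a b \) for an ellipse:

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)

# elliptical parameterization, t in [0, 2 pi]
x, y = a*sp.cos(t), b*sp.sin(t)

# A = (1/2) int (x y' - y x') dt
A = sp.Rational(1, 2) * sp.integrate(x*sp.diff(y, t) - y*sp.diff(x, t), (t, 0, 2*sp.pi))
print(A)  # pi*a*b
```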

Piecewise linear parametrization example.

One parameterization of the unit parallelogram depicted in fig. 1 is

\begin{equation}\label{eqn:containedArea:340}
\begin{aligned}
(x,y) &= (t, 0),\quad t \in [0,1] \\
&= (t, t - 1),\quad t \in [1,2] \\
&= (4 - t, 1),\quad t \in [2,3] \\
&= (4 - t, 4 - t),\quad t \in [3,4]
\end{aligned}
\end{equation}

fig. 1. Parallelogram with unit area.

Evaluating \( x y' - y x' \) on each of these segments gives
\begin{equation}\label{eqn:containedArea:360}
\begin{aligned}
(t) (0) - (0)(0) &= 0 \\
(t) (1) - (t-1)(1) &= 1 \\
(4-t)(0) - (1)(-1) &= 1 \\
(4-t)(-1) - (4-t)(-1) &= 0,
\end{aligned}
\end{equation}
and integrating
\begin{equation}\label{eqn:containedArea:380}
\inv{2} \int_0^4 \lr{ x y' - y x'} dt = \frac{2}{2} = 1,
\end{equation}
as expected. In this example, the tangent \( (x', y') \) is not continuous at the corners of the parallelogram, but that is not a requirement (nor should it be, since the area is well defined despite any corners.)
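If we'd rather let the computer do the piecewise bookkeeping, here is a scipy sketch of the same calculation (assuming scipy is available; the per-leg derivatives are just read off from the parameterization above):

```python
from scipy.integrate import quad

def integrand(t):
    # x y' - y x' on each leg of the parallelogram parameterization
    if t < 1:   x, y, xp, yp = t, 0.0, 1.0, 0.0
    elif t < 2: x, y, xp, yp = t, t - 1, 1.0, 1.0
    elif t < 3: x, y, xp, yp = 4 - t, 1.0, -1.0, 0.0
    else:       x, y, xp, yp = 4 - t, 4 - t, -1.0, -1.0
    return x*yp - y*xp

# A = (1/2) int_0^4 (x y' - y x') dt, integrated leg by leg
A = 0.5 * sum(quad(integrand, a, a + 1)[0] for a in range(4))
print(A)  # ~1.0
```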

Can we discover this relationship using the Jacobian?

Graphically, I can imagine that we could find this area relationship by considering a parameterization of a family of nested closed curves, as depicted in fig. 2.

fig. 2. Family of nested closed curves.

For such a parameterization, calculating the area is just a Jacobian evaluation
\begin{equation}\label{eqn:containedArea:80}
\begin{aligned}
A
&= \iint \frac{\partial(x, y)}{\partial(u,t)} du dt \\
&= \iint \lr{ \PD{u}{x} \PD{t}{y} – \PD{u}{y} \PD{t}{x} } du dt \\
&= \iint \lr{ \PD{u}{x} y’ – \PD{u}{y} x’ } du dt.
\end{aligned}
\end{equation}
Let’s try to eliminate the \( u \) derivatives using integration by parts, and see what we get.
\begin{equation}\label{eqn:containedArea:100}
\begin{aligned}
A
&= \iint \lr{ \PD{u}{x} y’ – \PD{u}{y} x’ } du dt \\
&= \iint \frac{d}{du} \lr{ x y' - y x' } du dt - \iint \lr{ x \PD{u}{y'} - y \PD{u}{x'} } du dt \\
&= \int \lr{ x y' - y x' } dt - \iint \lr{ x \PD{u}{y'} - y \PD{u}{x'} } du dt.
\end{aligned}
\end{equation}
This is interesting, as we find the area equation that we are interested in (times two), but we also pick up a strange new area integral. Essentially, we have found, assuming we trust the claim in the book, that
\begin{equation}\label{eqn:containedArea:120}
A = 2 A - \iint \lr{ x \PD{u}{y'} - y \PD{u}{x'} } du dt,
\end{equation}
so it seems that the area can also be expressed as
\begin{equation}\label{eqn:containedArea:140}
A = \iint \lr{ x \frac{\partial^2 y}{\partial u \partial t} - y \frac{\partial^2 x}{\partial u \partial t} } du dt.
\end{equation}
Let’s again use the circular parameterization to verify that this works. I won’t try to prove this directly, but instead, we’ll use Stokes’ theorem to prove the stated result, from which we get this second derivative area formula as a side effect by virtue of our integration by parts expansion above.

For the circular parameterization, we have
\begin{equation}\label{eqn:containedArea:160}
\begin{aligned}
A
&= \int_{r = 0}^R dr \int_{t = 0}^{2 \pi} dt \lr{ x \frac{\partial^2 y}{\partial r \partial t} - y \frac{\partial^2 x}{\partial r \partial t} } \\
&= \int_{r = 0}^R dr \int_{t = 0}^{2 \pi} dt \lr{ r \cos t \frac{\partial \sin t}{\partial t} - r \sin t \frac{\partial \cos t}{\partial t} } \\
&= \int_{r = 0}^R r dr \int_{t = 0}^{2 \pi} dt \lr{ \cos^2 t + \sin^2 t } \\
&= \frac{R^2}{2} 2 \pi \\
&= \pi R^2.
\end{aligned}
\end{equation}
This checks out, at least for this one specific circular parameterization.
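We can also let sympy check the second derivative form against a non-circular family of nested curves. Here is a sketch using a nested family of ellipses (my own choice of family), which again produces \( \pi a b \):

```python
import sympy as sp

u, t, a, b = sp.symbols('u t a b', positive=True)

# nested family of ellipses: u in [0, 1] selects the curve, t traverses it
x, y = u*a*sp.cos(t), u*b*sp.sin(t)

integrand = x*sp.diff(y, u, t) - y*sp.diff(x, u, t)
A = sp.integrate(integrand, (t, 0, 2*sp.pi), (u, 0, 1))
print(A)  # pi*a*b
```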

Area formula derivation using Stokes’ theorem.

Theorem 1.1: Green’s theorem.

\begin{equation}\label{eqn:containedArea:260}
\iint dx dy \lr{ \PD{x}{M} - \PD{y}{L} } = \oint L dx + M dy.
\end{equation}

Start proof:

We start with the general two parameter integration theorem
\begin{equation}\label{eqn:containedArea:180}
\iint F d^2 \Bx \lrpartial G = -\oint F d\Bx G,
\end{equation}
set \( F = 1, G = \Bf \), and apply scalar selection
\begin{equation}\label{eqn:containedArea:200}
\iint \gpgradezero{ d^2 \Bx \lrpartial \Bf } = -\oint d\Bx \cdot \Bf,
\end{equation}
to find the two parameter form of Stokes’ theorem
\begin{equation}\label{eqn:containedArea:220}
\iint d^2 \Bx \cdot \lr{ \spacegrad \wedge \Bf } = -\oint d\Bx \cdot \Bf.
\end{equation}

With a planar parameterization, say \( \Bf = L \Be_1 + M \Be_2 \), we have \( d\Bx \cdot \Bf = L dx + M dy \), and for the LHS
\begin{equation}\label{eqn:containedArea:240}
\begin{aligned}
\iint d^2 \Bx \cdot \lr{ \spacegrad \wedge \Bf }
&=
\iint dx dy \Be_{12}^2
\begin{vmatrix}
\partial_1 & \partial_2 \\
L & M
\end{vmatrix} \\
&=
-\iint dx dy \lr{ \PD{x}{M} - \PD{y}{L} }.
\end{aligned}
\end{equation}

End proof.

Parameterized area equation.

If we wish to evaluate an elementary area, we can pick \( L, M \) such that \( \PDi{x}{M} - \PDi{y}{L} = 1 \). One such selection is
\begin{equation}\label{eqn:containedArea:280}
\begin{aligned}
M &= \frac{x}{2} \\
L &= -\frac{y}{2},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:containedArea:300}
A = \inv{2} \oint -y dx + x dy = \inv{2} \int \lr{ x y' - y x' } dt.
\end{equation}
Clearly, there are other possible choices of \( L, M \) that we could use to find alternate area equations, but this choice seems to be independent of the shape of the region.
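For example, the asymmetric choice \( L = 0, M = x \) also satisfies \( \PDi{x}{M} - \PDi{y}{L} = 1 \), and gives
\begin{equation}\label{eqn:containedArea:320}
A = \oint x dy = \int_{t_0}^{t_1} x y' dt,
\end{equation}
and the choice \( L = -y, M = 0 \) similarly gives \( A = -\oint y dx \).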

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

Hyperbolic sine representation of mth Fibonacci number

July 26, 2024 math and physics play

[Click here for a PDF version of this post]

I saw a funky looking formula for the mth Fibonacci number on twitter
\begin{equation}\label{eqn:fibonacci_sinh:20}
F_m = \frac{2}{\sqrt{5} i^m} \sinh\lr{ m \ln\lr{i\phi} },
\end{equation}
where
\begin{equation}\label{eqn:fibonacci_sinh:60}
\phi = \frac{ 1 + \sqrt{5} }{2},
\end{equation}
is the golden ratio.

This certainly doesn’t look like it’s a representation of the sequence
\begin{equation}\label{eqn:fibonacci_sinh:40}
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \cdots
\end{equation}
We can verify that it works in Mathematica, as seen in fig. 1.

fig. 1. Verification of hyperbolic sine representation of mth Fibonacci number.
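The same check is quick to script outside of Mathematica too. Here is a small Python sketch of that verification (my own, not from the original post), rounding away the tiny imaginary residue left by the floating point evaluation:

```python
import cmath

phi = (1 + 5**0.5) / 2

def fib_sinh(m):
    # F_m = 2/(sqrt(5) i^m) sinh( m ln(i phi) )
    z = 2 / (5**0.5 * 1j**m) * cmath.sinh(m * cmath.log(1j * phi))
    return round(z.real)

print([fib_sinh(m) for m in range(1, 11)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```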

Recall that we previously found this formula for the mth Fibonacci number
\begin{equation}\label{eqn:fibonacci_sinh:80}
F_m = \inv{\sqrt{5}} \lr{ \phi^m - { \bar{\phi}}^m },
\end{equation}
where \( \bar{\phi} \) is the conjugate of the golden ratio
\begin{equation}\label{eqn:fibonacci_sinh:100}
\bar{\phi} = \frac{ 1 - \sqrt{5} }{2}.
\end{equation}

Let’s see how these are equivalent. First observe that the golden conjugate is easily related to the inverse of the golden ratio
\begin{equation}\label{eqn:fibonacci_sinh:120}
\begin{aligned}
\inv{\phi}
&=
\frac{2}{1 + \sqrt{5}} \\
&=
\frac{2\lr{ 1 - \sqrt{5}} }{1^2 - \lr{\sqrt{5}}^2 } \\
&=
-\frac{1 - \sqrt{5} }{2} \\
&=
-\bar{\phi}.
\end{aligned}
\end{equation}
Substitution gives
\begin{equation}\label{eqn:fibonacci_sinh:140}
F_m = \inv{\sqrt{5}} \lr{ \phi^m - \lr{\frac{-1}{\phi}}^m }.
\end{equation}
Multiplying by \( i^m \), we have
\begin{equation}\label{eqn:fibonacci_sinh:160}
\begin{aligned}
i^m F_m
&= \inv{\sqrt{5}} \lr{ i^m \phi^m - \inv{(-i)^m} \lr{\frac{-1}{\phi}}^m } \\
&= \inv{\sqrt{5}} \lr{ \lr{ i \phi} ^m - \lr{i \phi}^{-m} }.
\end{aligned}
\end{equation}

We can write any power in terms of \( e \)
\begin{equation}\label{eqn:fibonacci_sinh:180}
a^m = e^{\ln a^m} = e^{m \ln a},
\end{equation}
so
\begin{equation}\label{eqn:fibonacci_sinh:200}
\begin{aligned}
i^m F_m
&= \inv{\sqrt{5}} \lr{ e^{m \ln \lr{ i \phi}} - e^{-m \ln\lr{i \phi} } } \\
&= \inv{\sqrt{5}} 2 \sinh\lr{ m \ln \lr{ i \phi } },
\end{aligned}
\end{equation}
as we wanted to show. It’s a bit strange looking, but we see why it works.

A fun cube root simplification problem.

July 14, 2024 math and physics play

[Click here for a PDF version of this post]

I saw a thumbnail of a cube root simplification problem on youtube, and tried it myself before watching the video. I ended up needing two hints from the video to solve the problem.  The problem was to simplify
\begin{equation}\label{eqn:cuberootsimplify:20}
x = \lr{ \sqrt{5} - 2 }^{1/3}.
\end{equation}
My guess was that the solution was of the form
\begin{equation}\label{eqn:cuberootsimplify:40}
x = a \sqrt{5} + b,
\end{equation}
where \(a,b\) are rational numbers. I say that because, if we cube that expression for \(x\) we get
\begin{equation}\label{eqn:cuberootsimplify:60}
x^3 = a^3 5 \sqrt{5} + 15 a^2 b + 3 \sqrt{5} a b^2 + b^3,
\end{equation}
so we need to find rational solutions to the system
\begin{equation}\label{eqn:cuberootsimplify:80}
\begin{aligned}
\sqrt{5} \lr{ 5 a^3 + 3 a b^2 } &= \sqrt{5} \\
15 a^2 b + b^3 &= -2.
\end{aligned}
\end{equation}
My problem now was that this doesn’t look like it’s particularly easy to solve. Mathematica can do it easily, as shown in fig. 1.

fig. 1. Mathematica simultaneous rational cubic reduction.

But if I wanted to cheat, I could just ask Mathematica to simplify the expression, as in fig. 2.

fig. 2. Direct Mathematica simplification.

So, back to the drawing board. One thing that we can notice is that the expression in the cube root looks like it could be recast in terms of a difference of squares
\begin{equation}\label{eqn:cuberootsimplify:100}
\sqrt{5} - 2 = \sqrt{5} - \sqrt{4}.
\end{equation}
Let’s let \( a = \sqrt{5}, b = \sqrt{4} \), so that
\begin{equation}\label{eqn:cuberootsimplify:120}
\begin{aligned}
\sqrt{5} - \sqrt{4} &=
a - b \\
&= \frac{a^2 - b^2}{a + b} \\
&= \frac{5 - 4}{\sqrt{5} + \sqrt{4} }.
\end{aligned}
\end{equation}
This shows that we have a sort of “conjugate” relationship for this difference
\begin{equation}\label{eqn:cuberootsimplify:140}
\sqrt{5} - 2 = \inv{\sqrt{5} + 2}.
\end{equation}
Surely this can be exploited somehow in the simplification process. I was stumped at this point, and didn’t see where to go with this, so I cheated a different way (not using Mathematica this time) and looked at the video to see where he went with it. Sure enough, he used these related pairs, and let
\begin{equation}\label{eqn:cuberootsimplify:160}
\begin{aligned}
x &= \lr{ \sqrt{5} - 2 }^{1/3} \\
y &= \lr{ \sqrt{5} + 2 }^{1/3}.
\end{aligned}
\end{equation}
Without looking further, let’s see what we can do with these. Clearly, we’d like to cube them, so that we seek solutions to
\begin{equation}\label{eqn:cuberootsimplify:180}
\begin{aligned}
x^3 &= \sqrt{5} - 2 \\
y^3 &= \sqrt{5} + 2.
\end{aligned}
\end{equation}
Sums and differences look like they would be interesting
\begin{equation}\label{eqn:cuberootsimplify:200}
\begin{aligned}
x^3 + y^3 &= 2 \sqrt{5} \\
y^3 - x^3 &= 4.
\end{aligned}
\end{equation}
We’ve also seen that
\begin{equation}\label{eqn:cuberootsimplify:220}
x y = 1,
\end{equation}
so just like the initial guess problem, we are left with having to solve two simultaneous cubics, but this time the cubics are simpler, and we have a constraint condition that should be helpful.
My next guess was to form the cubes of \( x \pm y \), and use our constraint equation \( x y = 1 \) to simplify that. We find
\begin{equation}\label{eqn:cuberootsimplify:240}
\begin{aligned}
\lr{ x + y }^3
&= x^3 + 3 x^2 y + 3 x y^2 + y^3 \\
&= 2 \sqrt{5} + 3 \lr{ x + y} x y \\
&= 2 \sqrt{5} + 3 \lr{ x + y },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:cuberootsimplify:260}
\begin{aligned}
\lr{ y – x }^3
&= y^3 - 3 y^2 x + 3 y x^2 - x^3 \\
&= 4 - 3 \lr{ y - x } x y \\
&= 4 - 3 \lr{ y - x }.
\end{aligned}
\end{equation}
We can now let \( u = x + y, v = y - x \), and have a pair of independent equations to solve
\begin{equation}\label{eqn:cuberootsimplify:280}
\begin{aligned}
u^3 &= 2 \sqrt{5} + 3 u \\
v^3 &= 4 - 3 v.
\end{aligned}
\end{equation}
However, we still have cubic equations to solve, neither of which look particularly fun to reduce. I went around in circles from here and didn’t make much headway, and eventually went back to the video to see what he did. He ended up with an equivalent to my equation for \( v \) above, but I actually got there much more directly (my \( v \) was his \( -u \), so the exact steps he used differed.) His basic technique was to note that \( 4 = 3 + 1 \), so he looked for factors with \( 3 \) and \( 1 \) terms. In my case, that is equivalent to the observation that \( v = 1 \) is a root of the cubic in \( v \). So, we want to factor out \( v - 1 \) from
\begin{equation}\label{eqn:cuberootsimplify:300}
v^3 + 3 v - 4 = 0.
\end{equation}
Long dividing this by \( v -1 \) gives
\begin{equation}\label{eqn:cuberootsimplify:320}
\lr{ v – 1 } \lr{ v^2 + v + 4 } = 0.
\end{equation}
Completing the square for the quadratic factor gives
\begin{equation}\label{eqn:cuberootsimplify:340}
\lr{v + \inv{2} }^2 = \inv{4} - 4 = -\frac{15}{4},
\end{equation}
which has only complex solutions (and we want a positive real solution.) Equating the remaining factor to zero, and reminding ourselves about our \( x y \) constraint, we are now left with
\begin{equation}\label{eqn:cuberootsimplify:360}
\begin{aligned}
v = y - x &= 1 \\
x y &= 1.
\end{aligned}
\end{equation}
Solving both for \( y \) gives
\begin{equation}\label{eqn:cuberootsimplify:380}
y = x + 1 = \inv{x},
\end{equation}
or
\begin{equation}\label{eqn:cuberootsimplify:400}
x^2 + x = 1,
\end{equation}
or
\begin{equation}\label{eqn:cuberootsimplify:420}
\lr{ x + \inv{2} }^2 = 1 + \inv{4} = \frac{5}{4}.
\end{equation}
We are left with two possible solutions for \( x \)
\begin{equation}\label{eqn:cuberootsimplify:440}
x = -\inv{2} \pm \frac{\sqrt{5}}{2},
\end{equation}
and we can now discard the negative solution, and find
\begin{equation}\label{eqn:cuberootsimplify:460}
x = \frac{ \sqrt{5} - 1 }{2},
\end{equation}
matching the answer that we’d found with the Mathematica cheat earlier.
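As one last sanity check, sympy can confirm this as well (a sketch, distinct from the Mathematica cheat above). Since both \( x \) and \( \lr{\sqrt{5} - 1}/2 \) are positive reals, it is enough to compare cubes:

```python
import sympy as sp

x = sp.cbrt(sp.sqrt(5) - 2)
claimed = (sp.sqrt(5) - 1) / 2

print(sp.expand(claimed**3))   # -2 + sqrt(5), i.e. claimed**3 equals x**3
print(sp.N(x - claimed, 30))   # numerically zero
```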

Seeing the effort required to simplify this makes me impressed once again with Mathematica. I wonder what algorithm it uses to do the simplification?

A kind of fun high school physics collision problem, generalized slightly.

June 15, 2024 math and physics play

[Click here for a PDF version of this post]

fig. 1. The collision problem.

Karl’s studying for his grade 12 physics final, and I picked out some problems from his text [1] for him to work on. Here’s one, fig. 1, that he made a numerical error with.

I solved this two ways, the first was quick and dirty using Mathematica, so he could check his answer against a number, and then while he was working on it, I also tried it on paper. I found the specific numeric values annoying to work with, so tackled the slightly more general problem of an object of mass \( m_1 \) colliding with an object of mass \( m_2 \) initially at rest, and determined the final velocities of both.

If we want to solve this, we start with a plain old conservation of energy relationship, with initial potential energy equal to pre-collision kinetic energy
\begin{equation}\label{eqn:collisionproblem:20}
m_1 g h = \inv{2} m_1 v^2,
\end{equation}
where for this problem \( h = 3 - 3 \cos(\pi/3) = 1.5 \,\textrm{m} \), and \( m_1 = 4 \,\textrm{kg} \). This gives us the big ball’s pre-collision velocity
\begin{equation}\label{eqn:collisionproblem:40}
v = \sqrt{2 g h}.
\end{equation}

For the collision part of the problem, we have energy and momentum balance equations
\begin{equation}\label{eqn:collisionproblem:60}
\begin{aligned}
\inv{2} m_1 v^2 &= \inv{2} m_1 v_1^2 + \inv{2} m_2 v_2^2 \\
m_1 v &= m_1 v_1 + m_2 v_2.
\end{aligned}
\end{equation}
Clearly, the ratio of masses is more interesting than the masses themselves, so let’s write
\begin{equation}\label{eqn:collisionproblem:80}
\mu = \frac{m_1}{m_2}.
\end{equation}
For the specific problem at hand, this is a value of \( \mu = 2 \), but let’s not plug that in now, instead writing
\begin{equation}\label{eqn:collisionproblem:100}
\begin{aligned}
\mu v^2 &= \mu v_1^2 + v_2^2 \\
\mu \lr{ v - v_1 } &= v_2,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:collisionproblem:120}
v^2 = v_1^2 + \mu \lr{ v - v_1 }^2,
\end{equation}
or
\begin{equation}\label{eqn:collisionproblem:140}
v_1^2 \lr{ 1 + \mu } - 2 \mu v v_1 = v^2 \lr{ 1 - \mu }.
\end{equation}
Completing the square gives

\begin{equation}\label{eqn:collisionproblem:160}
\lr{ v_1 - \frac{\mu}{1 + \mu} v }^2 = \frac{\mu^2}{(1 + \mu)^2} v^2 + v^2 \frac{ 1 - \mu }{1 + \mu},
\end{equation}
or
\begin{equation}\label{eqn:collisionproblem:180}
\begin{aligned}
\frac{v_1}{v}
&= \frac{\mu}{1 + \mu} \pm \inv{1 + \mu} \sqrt{ \mu^2 + 1 - \mu^2 } \\
&= \frac{\mu \pm 1}{1 + \mu}.
\end{aligned}
\end{equation}
Our second velocity, relative to the initial, is
\begin{equation}\label{eqn:collisionproblem:200}
\begin{aligned}
\frac{v_2}{v}
&= \mu \lr{ 1 - \frac{v_1}{v} } \\
&= \mu \lr{ 1 - \frac{\mu \pm 1}{1 + \mu} } \\
&= \mu \frac{ 1 + \mu - \mu \mp 1 }{1 + \mu} \\
&= \mu \frac{ 1 \mp 1 }{1 + \mu}.
\end{aligned}
\end{equation}

The post collision velocities are
\begin{equation}\label{eqn:collisionproblem:220}
\begin{aligned}
v_1 &= \frac{\mu \pm 1}{1 + \mu} v \\
v_2 &= \mu v \frac{ 1 \mp 1 }{1 + \mu},
\end{aligned}
\end{equation}
but we see the equations describe one scenario that doesn’t make sense physically, because the positive case describes the first mass teleporting through and past the second mass, and continuing merrily on its way with its initial velocity. That means that our final solution is
\begin{equation}\label{eqn:collisionproblem:240}
\begin{aligned}
v_1 &= \frac{\mu - 1}{1 + \mu} v \\
v_2 &= 2 \frac{ \mu }{1 + \mu} v.
\end{aligned}
\end{equation}
For the original problem, with \( \mu = 2 \), that is \( v_1 = v / 3 \) and \( v_2 = 4 v /3 \), where \( v = \sqrt{ 2(9.8) 1.5 } \,\textrm{m/s} \).

For the post-collision heights part of the question, we have
\begin{equation}\label{eqn:collisionproblem:260}
\begin{aligned}
\inv{2} m_1 \lr{ \frac{v}{3} }^2 &= m_1 g h_1 \\
\inv{2} m_2 \lr{ \frac{4 v}{3} }^2 &= m_2 g h_2,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:collisionproblem:280}
\begin{aligned}
h_1 &= \inv{18} \frac{v^2}{g} = \inv{9} h \\
h_2 &= \frac{8}{9} \frac{v^2}{g} = \frac{16}{9} h,
\end{aligned}
\end{equation}
where \( h = 1.5 \,\textrm{m} \).
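These numbers are easy to spot check with a little helper function (a sketch of my own, not from the text; \( m_2 = 2 \,\textrm{kg} \) is implied by the stated \( \mu = 2 \)):

```python
import math

def collide(m1, m2, h, g=9.8):
    """m1 swings down through height h and elastically strikes m2 at rest.

    Returns (v, v1, v2, h1, h2): pre- and post-collision speeds, and the
    post-collision rise heights."""
    mu = m1 / m2
    v = math.sqrt(2 * g * h)
    v1 = (mu - 1) / (1 + mu) * v
    v2 = 2 * mu / (1 + mu) * v
    return v, v1, v2, v1**2 / (2 * g), v2**2 / (2 * g)

# the specific problem: m1 = 4 kg, mu = 2, h = 1.5 m
print(collide(4.0, 2.0, 1.5))
# (5.422..., 1.807..., 7.229..., 0.1666..., 2.6666...)
```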

The original question doesn’t ask for the second, or Nth, collision. That would be a bit more fun to try.

References

[1] Bruni, Dick, Speijer, and Stewart. Physics 12, University Preparation. Nelson, 2012.

More on time derivatives of integrals.

June 9, 2024 math and physics play

[Click here for a PDF version of this post]

Motivation.

I was asked about geometric algebra equivalents for a couple identities found in [1], one for line integrals
\begin{equation}\label{eqn:more_feynmans_trick:20}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bv \cdot \Bf } - \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx,
\end{equation}
and one for area integrals
\begin{equation}\label{eqn:more_feynmans_trick:40}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf } - \spacegrad \cross \lr{ \Bv \cross \Bf }
}
\cdot d\BA.
\end{equation}

Both of these look questionable at first glance, because neither has a boundary term. However, they can be transformed with Stokes’ theorem to
\begin{equation}\label{eqn:more_feynmans_trick:60}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
=
\int_{C(t)} \lr{
\PD{t}{\Bf} - \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx
+
\evalbar{\Bv \cdot \Bf }{\Delta C},
\end{equation}
and
\begin{equation}\label{eqn:more_feynmans_trick:80}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf }
}
\cdot d\BA
- \oint_{\partial S(t)} \lr{ \Bv \cross \Bf } \cdot d\Bx.
\end{equation}
The area integral derivative is now seen to be a variation of one of the special cases of the Leibniz integral rule; see for example [2]. The author admits that the line integral relationship is not widely used, and doesn’t show up on the Wikipedia page.

My end goal will be to evaluate the derivative of a general multivector line integral
\begin{equation}\label{eqn:more_feynmans_trick:100}
\ddt{} \int_{C(t)} F d\Bx G,
\end{equation}
and area integral
\begin{equation}\label{eqn:more_feynmans_trick:120}
\ddt{} \int_{S(t)} F d^2\Bx G.
\end{equation}
We’ve derived that line integral result in a different fashion previously, but it’s interesting to see a different approach. Perhaps this approach will lend itself nicely to non-scalar integrands?

Prerequisites.

Definition 1.1: Convective derivative.

The convective derivative of \( \phi(t, \Bx(t)) \) is defined as
\begin{equation*}
\frac{D \phi}{D t} = \lim_{\Delta t \rightarrow 0} \frac{ \phi(t + \Delta t, \Bx + \Delta t \Bv) - \phi(t, \Bx)}{\Delta t},
\end{equation*}
where \( \Bv = d\Bx/dt \).

Theorem 1.1: Convective derivative.

The convective derivative operator may be written
\begin{equation*}
\frac{D}{D t} = \PD{t}{} + \Bv \cdot \spacegrad.
\end{equation*}

Start proof:

Let’s write, with \( h = \Delta t \),
\begin{equation}\label{eqn:more_feynmans_trick:140}
\begin{aligned}
v_0 &= 1 \\
u_0 &= t + v_0 h \\
u_k &= x_k + v_k h, \quad k \in [1,3].
\end{aligned}
\end{equation}

The limit, if it exists, must equal the sum of the individual limits
\begin{equation}\label{eqn:more_feynmans_trick:160}
\frac{D \phi}{D t} = \sum_{\alpha = 0}^3 \lim_{\Delta t \rightarrow 0} \frac{ \phi(u_\alpha + v_\alpha h) - \phi(t, \Bx)}{h},
\end{equation}
but that is just a sum of derivatives, which can be evaluated by the chain rule
\begin{equation}\label{eqn:more_feynmans_trick:180}
\begin{aligned}
\frac{D \phi}{D t}
&= \sum_{\alpha = 0}^{3} \evalbar{ \PD{u_\alpha}{\phi(u_\alpha)} \PD{h}{u_\alpha} }{h = 0} \\
&= \PD{t}{\phi} + \sum_{k = 1}^3 v_k \PD{x_k}{\phi} \\
&= \lr{ \PD{t}{} + \Bv \cdot \spacegrad } \phi.
\end{aligned}
\end{equation}

End proof.
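A quick symbolic spot check of this operator identity is easy with sympy (the specific field and trajectory below are arbitrary choices of mine):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# an arbitrary scalar field, and an arbitrary trajectory x(t)
phi = sp.sin(x*y) + t*z**2
traj = {x: sp.cos(t), y: t**2, z: sp.exp(-t)}

# convective derivative, computed directly as d/dt of phi along the trajectory
lhs = sp.diff(phi.subs(traj), t)

# (d/dt + v . grad) phi, evaluated on the same trajectory
v = {s: sp.diff(traj[s], t) for s in (x, y, z)}
rhs = (sp.diff(phi, t) + sum(v[s]*sp.diff(phi, s) for s in (x, y, z))).subs(traj)

print(sp.simplify(lhs - rhs))  # 0
```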

Definition 1.2: Hestenes overdot notation.

We may use a dot or a tick with a derivative operator to designate the scope of that operator, allowing it to operate bidirectionally, or in a restricted fashion, holding specific multivector elements constant. This is called the Hestenes overdot notation. Illustrating by example, with multivectors \( F, G \), and allowing the gradient to act bidirectionally, we have
\begin{equation*}
\begin{aligned}
F \spacegrad G
&=
\dot{F} \dot{\spacegrad} G
+
F \dot{\spacegrad} \dot{G} \\
&=
\sum_i \lr{ \partial_i F } \Be_i G + \sum_i F \Be_i \lr{ \partial_i G }.
\end{aligned}
\end{equation*}
The last step is a precise statement of the meaning of the overdot notation, showing that we hold the position of the vector elements of the gradient constant, while the (scalar) partials are allowed to commute, acting on the designated elements.

We will need one additional identity.

Lemma 1.1: Gradient of dot product (one constant vector.)

Given vectors \( \Ba, \Bb \) the gradient of their dot product is given by
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba - \Bb \cdot \lr{ \spacegrad \wedge \Ba }
+ \lr{ \Ba \cdot \spacegrad } \Bb - \Ba \cdot \lr{ \spacegrad \wedge \Bb }.
\end{equation*}
If \( \Bb \) is constant, this reduces to
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba - \Bb \cdot \lr{ \spacegrad \wedge \Ba }.
\end{equation*}

Start proof:

The \( \Bb \) constant case is trivial to prove. We use \( \Ba \cdot \lr{ \Bb \wedge \Bc } = \lr{ \Ba \cdot \Bb} \Bc - \Bb \lr{ \Ba \cdot \Bc } \), and simply expand the dot product of the vector with the curl
\begin{equation}\label{eqn:more_feynmans_trick:200}
\Bb \cdot \lr{ \spacegrad \wedge \Ba }
=
\Bb \cdot \lr{ \dot{\spacegrad} \wedge \dot{\Ba} }
= \lr{ \Bb \cdot \dot{\spacegrad} } \dot{\Ba} - \dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }.
\end{equation}
Rearrangement proves the \( \Bb \) constant identity. The more general statement follows from a chain rule evaluation of the gradient, holding each vector constant in turn
\begin{equation}\label{eqn:more_feynmans_trick:320}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
+
\dot{\spacegrad} \lr{ \dot{\Bb} \cdot \Ba }.
\end{equation}

End proof.
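As a spot check of the constant \( \Bb \) case, we can use \( \Bb \cdot \lr{ \spacegrad \wedge \Ba } = -\Bb \cross \lr{ \spacegrad \cross \Ba } \) to translate it into the conventional cross product form \( \spacegrad \lr{ \Ba \cdot \Bb } = \lr{ \Bb \cdot \spacegrad } \Ba + \Bb \cross \lr{ \spacegrad \cross \Ba } \), and verify that with sympy.vector (the sample fields are arbitrary choices of mine):

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# an arbitrary vector field a, and a constant vector b
a = x*y**2*N.i + sp.sin(z)*N.j + (x + z**3)*N.k
b = 2*N.i - 3*N.j + 5*N.k

# (b . grad) a, built component by component
comps = [(b.dot(gradient(a.dot(e))))*e for e in (N.i, N.j, N.k)]
b_dot_grad_a = comps[0] + comps[1] + comps[2]

lhs = gradient(a.dot(b))
rhs = b_dot_grad_a + b.cross(curl(a))

print(sp.simplify((lhs - rhs).to_matrix(N)))  # zero column vector
```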

Time derivative of a line integral of a vector field.

We now have all our tools assembled, and can proceed to evaluate the derivative of the line integral. We want to show that

Theorem 1.2:

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along a curve \( C(t) \) moving through space at a velocity \( \Bv(\Bx(\lambda)) \), and a vector function \( \Bf = \Bf(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bf \cdot \Bv } + \Bv \cdot \lr{ \spacegrad \wedge \Bf}
} \cdot d\Bx
\end{equation*}

Start proof:

I’m going to avoid thinking about the rigorous details, like any requirements for curve continuity and smoothness. We will, however, specify that the end points are given by \( [\lambda_1, \lambda_2] \). Expanding out the parameterization, we seek to evaluate
\begin{equation}\label{eqn:more_feynmans_trick:240}
\int_{C(t)} \Bf \cdot d\Bx
=
\int_{\lambda_1}^{\lambda_2} \Bf(t, \Bx(\lambda) ) \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda.
\end{equation}
The parametric form nicely moves all the boundary time dependence into the integrand, allowing us to write
\begin{equation}\label{eqn:more_feynmans_trick:260}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) \cdot \frac{\partial}{\partial \lambda} \lr{ \Bx + \Delta t \Bv(\Bx(\lambda)) } - \Bf(t, \Bx(\lambda)) \cdot \frac{\partial \Bx}{\partial \lambda} } d\lambda \\
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) - \Bf(t, \Bx)} \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&\quad+
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda) )) \cdot \PD{\lambda}{}\Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\frac{D \Bf}{Dt} \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda +
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) \cdot \frac{\partial}{\partial \lambda} \Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda
+
\int_{\lambda_1}^{\lambda_2}
\Bf \cdot \frac{\partial \Bv}{\partial \lambda} d\lambda.
\end{aligned}
\end{equation}
At this point, we have a \( d\Bx \) in the first integrand, and a \( d\Bv \) in the second. We can expand the second integrand, evaluating the derivative using chain rule to find
\begin{equation}\label{eqn:more_feynmans_trick:280}
\begin{aligned}
\Bf \cdot \PD{\lambda}{\Bv}
&=
\sum_i \Bf \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} \\
&=
\sum_{i,j} f_j \PD{x_i}{v_j} \PD{\lambda}{x_i} \\
&=
\sum_{j} f_j \lr{ \spacegrad v_j } \cdot \PD{\lambda}{\Bx} \\
&=
\sum_{j} \lr{ \dot{\spacegrad} f_j \dot{v_j} } \cdot \PD{\lambda}{\Bx} \\
&=
\dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } \cdot \PD{\lambda}{\Bx}.
\end{aligned}
\end{equation}
Substitution gives
\begin{equation}\label{eqn:more_feynmans_trick:300}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf + \dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \lr{ \Bv \cdot \spacegrad } \Bf
- \dot{\spacegrad} \lr{ \dot{\Bf} \cdot \Bv }
} \cdot d\Bx \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \Bv \cdot \lr{ \spacegrad \wedge \Bf }
} \cdot d\Bx,
\end{aligned}
\end{equation}
where the last simplification utilizes lemma 1.1.

End proof.

Since \( \Ba \cdot \lr{ \Bb \wedge \Bc } = -\Ba \cross \lr{ \Bb \cross \Bc } \), observe that we have also recovered \ref{eqn:more_feynmans_trick:20}.
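It's also comforting to spot check theorem 1.2 on a concrete case. Here is a sympy sketch (an expanding circle and a made up time dependent field, both arbitrary choices of mine), written in the cross product form of \ref{eqn:more_feynmans_trick:20}, with the curve velocity extended to the field \( \Bv = (x, y)/(1 + t) \):

```python
import sympy as sp

t, lam, x, y = sp.symbols('t lambda x y', real=True)

# expanding circle C(t): points parameterized by lambda in [0, 2 pi]
X = (1 + t)*sp.cos(lam)
Y = (1 + t)*sp.sin(lam)

# velocity of the curve points, as a field: d/dt (X, Y) = (x, y)/(1 + t)
vx, vy = x/(1 + t), y/(1 + t)

# an arbitrary time dependent vector field f(t, x)
fx, fy = x*y + t, x - t*y

# LHS: differentiate the parameterized line integral directly
line_int = sp.integrate(
    (fx*sp.diff(X, lam) + fy*sp.diff(Y, lam)).subs({x: X, y: Y}), (lam, 0, 2*sp.pi))
lhs = sp.diff(line_int, t)

# RHS integrand: df/dt + grad(f . v) - v x (curl f), written out in 2D
fdotv = fx*vx + fy*vy
curlz = sp.diff(fy, x) - sp.diff(fx, y)             # z component of curl f
gx = sp.diff(fx, t) + sp.diff(fdotv, x) - vy*curlz  # -(v x curl f)_x
gy = sp.diff(fy, t) + sp.diff(fdotv, y) + vx*curlz  # -(v x curl f)_y
rhs = sp.integrate(
    (gx*sp.diff(X, lam) + gy*sp.diff(Y, lam)).subs({x: X, y: Y}), (lam, 0, 2*sp.pi))

print(sp.simplify(lhs - rhs))  # 0
```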

Time derivative of a line integral of a bivector field.

For a bivector line integral, we have

Theorem 1.3:

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along a curve \( C(t) \) moving through space at a velocity \( \Bv(\Bx(\lambda)) \), and a bivector function \( B = B(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ d\Bx \cdot \spacegrad } \lr{ B \cdot \Bv } + \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{equation*}

Start proof:

Skipping the steps that follow our previous procedure exactly, we have
\begin{equation}\label{eqn:more_feynmans_trick:340}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot d\Bv.
\end{equation}
Since
\begin{equation}\label{eqn:more_feynmans_trick:360}
\begin{aligned}
B \cdot d\Bv
&= B \cdot \PD{\lambda}{\Bv} d\lambda \\
&= B \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} d\lambda \\
&= B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv },
\end{aligned}
\end{equation}
we have
\begin{equation}\label{eqn:more_feynmans_trick:380}
\ddt{} \int_{C(t)} B \cdot d\Bx
=
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }.
\end{equation}
Let’s reduce the last two terms in this integrand
\begin{equation}\label{eqn:more_feynmans_trick:400}
\begin{aligned}
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx -
\lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \dot{\Bv} \cdot B } \\
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx
- \lr{ d\Bx \cdot \spacegrad} \lr{ \Bv \cdot B }
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \cdot \dot{\spacegrad} } \dot{B} \cdot d\Bx
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \lr{ d\Bx \cdot \spacegrad } - d\Bx \lr{ \Bv \cdot \spacegrad } } \cdot B \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{aligned}
\end{equation}
Back substitution finishes the job.

End proof.

Time derivative of a multivector line integral.

Theorem 1.4: Time derivative of multivector line integral.

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along a curve \( C(t) \) moving through space at a velocity \( \Bv(\Bx(\lambda)) \), and multivector functions \( M = M(t, \Bx(\lambda)), N = N(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\frac{D}{D t} M d\Bx N + M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.
\end{equation*}

It is useful to write this out explicitly for clarity
\begin{equation}\label{eqn:more_feynmans_trick:420}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\PD{t}{M} d\Bx N + M d\Bx \PD{t}{N}
+ \dot{M} \lr{ \Bv \cdot \dot{\spacegrad} } N
+ M \lr{ \Bv \cdot \dot{\spacegrad} } \dot{N}
+ M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.
\end{equation}

Proof is left to the reader, but follows the patterns above.

It’s not obvious whether there is a nice way to reduce this, as we did for the scalar valued line integral of a vector function, and the vector valued line integral of a bivector function. In particular, our vector and bivector results had \( \spacegrad \lr{ \Bf \cdot \Bv } \), and \( \spacegrad \lr{ B \cdot \Bv } \) terms respectively, which allows for the boundary term to be evaluated using Stokes’ theorem. Is such a manipulation possible here?

Coming later: surface integrals!

References

[1] Nicholas Kemmer. Vector Analysis: A physicist’s guide to the mathematics of fields in three dimensions. CUP Archive, 1977.

[2] Wikipedia contributors. Leibniz integral rule — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Leibniz_integral_rule&oldid=1223666713, 2024. [Online; accessed 22-May-2024].