In the class notes and the text [1], the von Neumann entropy is defined as
\begin{equation}\label{eqn:densityMatrixEntropy:20}
S = -\textrm{Tr} \rho \ln \rho.
\end{equation}
In one of our problems I had trouble evaluating this, having calculated a density operator matrix representation
\begin{equation}\label{eqn:densityMatrixEntropy:40}
\rho = E \Lambda E^{-1},
\end{equation}
where
\begin{equation}\label{eqn:densityMatrixEntropy:60}
E = \inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix},
\end{equation}
and
\begin{equation}\label{eqn:densityMatrixEntropy:100}
\Lambda =
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{equation}
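None of this requires software, but a concrete numerical check is handy. Here is a minimal sketch (using Python and numpy, my own choice and not anything from the text or class notes) that builds \( \rho \) from this eigendecomposition:

```python
import numpy as np

# Eigenvector matrix E; orthogonal and symmetric, so E is its own inverse.
E = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# Diagonal eigenvalue matrix \Lambda.
L = np.diag([1.0, 0.0])

rho = E @ L @ np.linalg.inv(E)

print(rho)            # [[0.5, 0.5], [0.5, 0.5]]
print(np.trace(rho))  # 1.0, as required for a density matrix
```

This \( \rho \) is a rank one projector, i.e. a pure state.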
The usual method of evaluating a function of a matrix is to assume that the function has a power series representation, and that a diagonalizing similarity transformation of the form \( A = E \Lambda E^{-1} \) is possible, so that
\begin{equation}\label{eqn:densityMatrixEntropy:80}
f(A) = E f(\Lambda) E^{-1}.
\end{equation}
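For a well behaved function this recipe is easy to verify numerically. A sketch (assuming numpy and scipy, neither of which appears in the original post) that evaluates \( f(A) \) for a Hermitian matrix by applying \( f \) to the eigenvalues:

```python
import numpy as np
from scipy.linalg import expm

def matrix_function(A, f):
    """Evaluate f(A) = E f(Lambda) E^{-1} for Hermitian A."""
    w, E = np.linalg.eigh(A)   # eigenvalues w, orthonormal eigenvectors in the columns of E
    return E @ np.diag(f(w)) @ E.conj().T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Agrees with scipy's independent matrix exponential.
print(np.allclose(matrix_function(A, np.exp), expm(A)))  # True
```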
However, attempting this recipe with the density matrix of \ref{eqn:densityMatrixEntropy:40} leads to an undesirable result
\begin{equation}\label{eqn:densityMatrixEntropy:120}
\ln \rho =
\inv{2}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
\ln 1 & 0 \\
0 & \ln 0
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}
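That \( \ln 0 \) entry poisons the evaluation in floating point too: the matrix logarithm acquires \( \pm\infty \) entries, and the naive trace evaluates to NaN. A quick sketch (numpy again, as above):

```python
import numpy as np

E = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho = E @ np.diag([1.0, 0.0]) @ E.T

with np.errstate(divide='ignore', invalid='ignore'):
    log_L = np.diag(np.log(np.array([1.0, 0.0])))  # diag(ln 1, ln 0) = diag(0, -inf)
    log_rho = E @ log_L @ E.T                      # every entry is +/- inf
    S_naive = -np.trace(rho @ log_rho)

print(S_naive)  # nan: the -inf + inf terms make the naive evaluation meaningless
```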
To give meaning to the entropy expression despite this \( \ln 0 \), we have to do two things. The first is to treat the trace operation as having higher precedence than the logarithm it contains. That is, using the cyclic invariance of the trace,
\begin{equation}\label{eqn:densityMatrixEntropy:140}
\begin{aligned}
-\textrm{Tr} ( \rho \ln \rho )
&=
-\textrm{Tr} ( E \Lambda E^{-1} E \ln \Lambda E^{-1} ) \\
&=
-\textrm{Tr} ( E \Lambda \ln \Lambda E^{-1} ) \\
&=
-\textrm{Tr} ( E^{-1} E \Lambda \ln \Lambda ) \\
&=
-\textrm{Tr} ( \Lambda \ln \Lambda ) \\
&=
- \sum_k \Lambda_{kk} \ln \Lambda_{kk}.
\end{aligned}
\end{equation}
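As a sanity check of this reduction: for a state with no zero eigenvalues both sides are unproblematic, and they agree numerically. A sketch, with an arbitrarily chosen mixed state (the 0.7, 0.3 weights are made up for illustration):

```python
import numpy as np
from scipy.linalg import logm

E = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho = E @ np.diag([0.7, 0.3]) @ E.T     # a mixed state, no zero eigenvalues

S_matrix = -np.trace(rho @ logm(rho))   # direct -Tr(rho ln rho)
S_eigen = -sum(l * np.log(l) for l in np.linalg.eigvalsh(rho))

print(np.isclose(S_matrix, S_eigen))    # True
```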
Now the matrix logarithm itself need not be evaluated explicitly, but we still need to give meaning to \( \Lambda_{kk} \ln \Lambda_{kk} \) for the zero diagonal entries. This can be done by considering a limiting scenario. Substituting \( a = e^{-x} \),
\begin{equation}\label{eqn:densityMatrixEntropy:160}
\begin{aligned}
-\lim_{a \rightarrow 0^+} a \ln a
&=
-\lim_{x \rightarrow \infty} e^{-x} \ln e^{-x} \\
&=
\lim_{x \rightarrow \infty} x e^{-x} \\
&=
0.
\end{aligned}
\end{equation}
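This \( 0 \ln 0 = 0 \) convention is common enough that scipy bakes it in: scipy.special.xlogy evaluates \( x \ln y \) and returns zero when \( x = 0 \). A quick sketch:

```python
import numpy as np
from scipy.special import xlogy

# a ln a shrinks to zero as a -> 0+
for a in [1e-2, 1e-6, 1e-12]:
    print(a * np.log(a))    # -0.046, -1.4e-05, -2.8e-11

# The 0 ln 0 = 0 convention, built in.
print(xlogy(0.0, 0.0))      # 0.0
```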
The entropy can now be expressed in an unambiguous form, as a sum over only the non-zero eigenvalues of the density operator
\begin{equation}\label{eqn:densityMatrixEntropy:180}
\boxed{
S = - \sum_{ \Lambda_{kk} \ne 0} \Lambda_{kk} \ln \Lambda_{kk}.
}
\end{equation}
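As a final check, here is a minimal sketch of the whole computation (numpy assumed as before; the eigvalsh call and the round-off tolerance are my own choices, not anything from the text):

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S = -sum_k L_kk ln L_kk over eigenvalues L_kk > tol (0 ln 0 := 0)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > tol]    # drop zero eigenvalues (and tiny negative round-off)
    return -np.sum(w * np.log(w))

E = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

print(von_neumann_entropy(E @ np.diag([1.0, 0.0]) @ E.T))  # 0.0: a pure state has zero entropy
print(von_neumann_entropy(np.eye(2) / 2))                  # 0.693... = ln 2, the maximally mixed state
```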
References
[1] Jun John Sakurai and Jim J. Napolitano. Modern Quantum Mechanics. Pearson Higher Ed, 2014.