conditioning number

Final notes for ECE1254, Modelling of Multiphysics Systems

December 27, 2014 ece1254


I’ve now finished my first grad course, Modelling of Multiphysics Systems, taught by Prof Piero Triverio.

I’ve posted notes for lectures and other material as I was taking the course, but now have an aggregated set of notes for the whole course posted.
This is now updated with all my notes from the lectures, solved problems, additional notes on auxiliary topics I wanted to explore (like SVD), plus the notes from the Harmonic Balance report that Mike and I will be presenting in January.

This version of my notes also includes all the Matlab figures, regenerated using http://www.mathworks.com/matlabcentral/fileexchange/23629-export-fig, which allows saving a figure as PDF; those rescale much better than Matlab saveas() PNGs when embedded in LaTeX.  I’m not sure if that’s the best way to include Matlab figures in LaTeX, but they are at least not fuzzy looking now.

All in all, I’m pretty pleased with my notes for this course.  They are a lot more readable than any of the ones I’ve done for the physics undergrad courses I was taking (https://peeterjoot.com/writing/).  While there was quite a lot covered in this course, the material really only requires an introductory circuits course and some basic math (linear algebra and intro calculus), so it is pretty accessible.

This was a fun course.  I recall, back in ancient times when I was a first year student, being unsatisfied with all the ad-hoc strategies we used to solve circuits problems.  This finally answers the question of how to tackle things more systematically.

Here’s the contents outline for these notes:

Preface
Lecture notes
1 nodal analysis
1.1 In slides
1.2 Mechanical structures example
1.3 Assembling system equations automatically. Node/branch method
1.4 Nodal Analysis
1.5 Modified nodal analysis (MNA)
2 solving large systems
2.1 Gaussian elimination
2.2 LU decomposition
2.3 Problems
3 numerical errors and conditioning
3.1 Strict diagonal dominance
3.2 Exploring uniqueness and existence
3.3 Perturbation and norms
3.4 Matrix norm
4 singular value decomposition, and conditioning number
4.1 Singular value decomposition
4.2 Conditioning number
5 sparse factorization
5.1 Fill ins
5.2 Markowitz product
5.3 Markowitz reordering
5.4 Graph representation
6 gradient methods
6.1 Summary of factorization costs
6.2 Iterative methods
6.3 Gradient method
6.4 Recap: Summary of Gradient method
6.5 Conjugate gradient method
6.6 Full Algorithm
6.7 Order analysis
6.8 Conjugate gradient convergence
6.9 Gershgorin circle theorem
6.10 Preconditioning
6.11 Symmetric preconditioning
6.12 Preconditioned conjugate gradient
6.13 Problems
7 solution of nonlinear systems
7.1 Nonlinear systems
7.2 Richardson and Linear Convergence
7.3 Newton’s method
7.4 Solution of N nonlinear equations in N unknowns
7.5 Multivariable Newton’s iteration
7.6 Automatic assembly of equations for nonlinear system
7.7 Damped Newton’s method
7.8 Continuation parameters
7.9 Singular Jacobians
7.10 Struts and Joints, Node branch formulation
7.11 Problems
8 time dependent systems
8.1 Assembling equations automatically for dynamical systems
8.2 Numerical solution of differential equations
8.3 Forward Euler method
8.4 Backward Euler method
8.5 Trapezoidal rule (TR)
8.6 Nonlinear differential equations
8.7 Analysis, accuracy and stability (\( \Delta t \rightarrow 0 \))
8.8 Residual for LMS methods
8.9 Global error estimate
8.10 Stability
8.11 Stability (continued)
8.12 Problems
9 model order reduction
9.1 Model order reduction
9.2 Moment matching
9.3 Model order reduction (cont).
9.4 Moment matching
9.5 Truncated Balanced Realization (1000 ft overview)
9.6 Problems
Final report
10 harmonic balance
10.1 Abstract
10.2 Introduction
10.2.1 Modifications to the netlist syntax
10.3 Background
10.3.1 Discrete Fourier Transform
10.3.2 Harmonic Balance equations
10.3.3 Frequency domain representation of MNA equations
10.3.4 Example. RC circuit with a diode.
10.3.5 Jacobian
10.3.6 Newton’s method solution
10.3.7 Alternative handling of the non-linear currents and Jacobians
10.4 Results
10.4.1 Low pass filter
10.4.2 Half wave rectifier
10.4.3 AC to DC conversion
10.4.4 Bridge rectifier
10.4.5 Cpu time and error vs N
10.4.6 Taylor series non-linearities
10.4.7 Stiff systems
10.5 Conclusion
10.6 Appendices
10.6.1 Discrete Fourier Transform inversion
Appendices
a singular value decomposition
b basic theorems and definitions
c norton equivalents
d stability of discretized linear differential equations
e laplace transform refresher
f discrete fourier transform
g harmonic balance, rough notes
g.1 Block matrix form, with physical parameter ordering
g.2 Block matrix form, with frequency ordering
g.3 Representing the linear sources
g.4 Representing non-linear sources
g.5 Newton’s method
g.6 A matrix formulation of Harmonic Balance non-linear currents
h matlab notebooks
i mathematica notebooks
Index
Bibliography

ECE1254H Modeling of Multiphysics Systems. Lecture 6: Singular value decomposition, and conditioning number. Taught by Prof. Piero Triverio

October 1, 2014 ece1254


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

Matrix norm

We’ve defined the matrix norm of \( M \), for the system \( \overline{{y}} = M \overline{{x}} \) as

\begin{equation}\label{eqn:multiphysicsL6:21}
\Norm{M} = \max_{\Norm{\overline{{x}}} = 1} \Norm{ M \overline{{x}} }.
\end{equation}

We will typically use the \( L_2 \) norm, so that the matrix norm is
\begin{equation}\label{eqn:multiphysicsL6:41}
\Norm{M}_2 = \max_{\Norm{\overline{{x}}}_2 = 1} \Norm{ M \overline{{x}} }_2.
\end{equation}

It can be shown that

\begin{equation}\label{eqn:multiphysicsL6:61}
\Norm{M}_2 = \max_i \sigma_i(M),
\end{equation}

where \( \sigma_i(M) \) are the (SVD) singular values.
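This is easy to check numerically. Here is a minimal Matlab sketch (not from the lecture; the matrix is an arbitrary choice) comparing norm(), the largest singular value, and a brute force maximization of \( \Norm{M \overline{{x}}}_2 \) over random unit vectors:

% Relate the induced 2-norm to the largest singular value.
M = randn(4, 3);                      % arbitrary small example matrix

sigmaMax = max(svd(M));               % largest singular value
normM    = norm(M, 2);                % Matlab's matrix 2-norm

% brute force: maximize ||M x||_2 over many random unit vectors x
best = 0;
for k = 1:1e4
    x = randn(3, 1);
    x = x / norm(x);                  % ||x||_2 = 1
    best = max(best, norm(M * x));
end

disp([normM, sigmaMax, best])         % first two agree; the third approaches them from below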

Singular value decomposition (SVD)

Given \( M \in R^{m \times n} \), we can find a representation of \( M \)

\begin{equation}\label{eqn:multiphysicsL6:81}
M = U \Sigma V^\T,
\end{equation}

where \( U \) and \( V\) are orthogonal matrices such that \( U^\T U = 1 \), and \( V^\T V = 1 \), and

\begin{equation}\label{eqn:multiphysicsL6:101}
\Sigma =
\begin{bmatrix}
\sigma_1 & & & & & &\\
& \sigma_2 & & & & &\\
& & \ddots & & & &\\
& & & \sigma_r & & &\\
& & & & 0 & & \\
& & & & & \ddots & \\
& & & & & & 0 \\
\end{bmatrix}
\end{equation}

The values \( \sigma_i, i \le \min(n,m) \) are called the singular values of \( M \). The singular values are subject to the ordering

\begin{equation}\label{eqn:multiphysicsL6:261}
\sigma_{1} \ge \sigma_{2} \ge \cdots \ge 0
\end{equation}

If \(r\) is the rank of \( M \), then the \( \sigma_r \) above is the minimum non-zero singular value (but the zeros are also called singular values).

Observe that the condition \( U^\T U = 1 \) is a statement of orthonormality. In terms of column vectors \( \overline{{u}}_i \), such a product written out explicitly is

\begin{equation}\label{eqn:multiphysicsL6:301}
\begin{bmatrix}
\overline{{u}}_1^\T \\ \overline{{u}}_2^\T \\ \vdots \\ \overline{{u}}_m^\T
\end{bmatrix}
\begin{bmatrix}
\overline{{u}}_1 & \overline{{u}}_2 & \cdots & \overline{{u}}_m
\end{bmatrix}
=
\begin{bmatrix}
1 & & & \\
& 1 & & \\
& & \ddots & \\
& & & 1
\end{bmatrix}.
\end{equation}

This encodes both normalization, \( \overline{{u}}_i^\T \overline{{u}}_i = 1 \), and orthogonality, \( \overline{{u}}_i^\T \overline{{u}}_j = 0, i \ne j \).
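As a quick check (a Matlab sketch with an arbitrary small matrix, not part of the lecture), the factors returned by svd() satisfy exactly these conditions, along with the factorization \( M = U \Sigma V^\T \):

% Compute an SVD and verify orthogonality of the factors and the reconstruction.
M = [1 2; 3 4; 5 6];            % arbitrary 3x2 example
[U, S, V] = svd(M);

disp(norm(U' * U - eye(3)))     % U^T U = 1 (3x3 identity), ~0
disp(norm(V' * V - eye(2)))     % V^T V = 1 (2x2 identity), ~0
disp(norm(U * S * V' - M))      % reconstruction error, ~0
disp(diag(S)')                  % singular values, in decreasing order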

Example: 2 x 2 case

(for column vectors \( \overline{{u}}_i, \overline{{v}}_j \)).

\begin{equation}\label{eqn:multiphysicsL6:281}
M =
\begin{bmatrix}
\overline{{u}}_1 & \overline{{u}}_2
\end{bmatrix}
\begin{bmatrix}
\sigma_1 & \\
& \sigma_2
\end{bmatrix}
\begin{bmatrix}
\overline{{v}}_1^\T \\
\overline{{v}}_2^\T
\end{bmatrix}
\end{equation}

Consider \( \overline{{y}} = M \overline{{x}} \), and take an \( \overline{{x}} \) with \( \Norm{\overline{{x}}}_2 = 1 \).

Note: I’ve chosen not to sketch what was drawn on the board. See instead the animated gif of the same in \citep{wiki:svd}.

A very nice video treatment of SVD by Prof Gilbert Strang can be found in \citep{ocw:svd}.

Conditioning number

Given a perturbation of \( M \overline{{x}} = \overline{{b}} \) to

\begin{equation}\label{eqn:multiphysicsL6:121}
\lr{ M + \delta M }
\lr{ \overline{{x}} + \delta \overline{{x}} } = \overline{{b}},
\end{equation}

or
\begin{equation}\label{eqn:multiphysicsL6:141}
\underbrace{ M \overline{{x}} - \overline{{b}} }_{=0} + \delta M \overline{{x}} + M \delta \overline{{x}} + \delta M \delta \overline{{x}} = 0.
\end{equation}

This gives

\begin{equation}\label{eqn:multiphysicsL6:161}
M \delta \overline{{x}} = - \delta M \overline{{x}} - \delta M \delta \overline{{x}},
\end{equation}

or
\begin{equation}\label{eqn:multiphysicsL6:181}
\delta \overline{{x}} = - M^{-1} \delta M \lr{ \overline{{x}} + \delta \overline{{x}} }.
\end{equation}

Taking norms

\begin{equation}\label{eqn:multiphysicsL6:201}
\Norm{ \delta \overline{{x}}}_2 = \Norm{
M^{-1} \delta M \lr{ \overline{{x}} + \delta \overline{{x}} }
}_2
\le
\Norm{ M^{-1} }_2 \Norm{ \delta M }_2 \Norm{ \overline{{x}} + \delta \overline{{x}} }_2,
\end{equation}

or
\begin{equation}\label{eqn:multiphysicsL6:221}
\underbrace{ \frac{\Norm{ \delta \overline{{x}}}_2 }{ \Norm{ \overline{{x}} + \delta \overline{{x}}}_2 } }_{\text{relative error of solution}}
\le
\underbrace{ \Norm{M}_2 \Norm{M^{-1}}_2 }_{\text{conditioning number of \(M\)}}
\underbrace{ \frac{ \Norm{ \delta M}_2 } {\Norm{M}_2} }_{\text{relative perturbation of \( M \)}}.
\end{equation}
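Here is a small numerical experiment (a Matlab sketch, not part of the lecture; the matrix and perturbation size are arbitrary choices) checking that the relative error of the solution indeed stays below the conditioning number times the relative perturbation of \( M \):

% Check the perturbation bound numerically.
n = 5;
M  = randn(n) + n * eye(n);        % a reasonably well conditioned matrix
x  = randn(n, 1);
b  = M * x;                        % unperturbed system M x = b

dM = 1e-8 * randn(n);              % small perturbation delta M
xp = (M + dM) \ b;                 % perturbed solution x + delta x
dx = xp - x;                       % delta x

relErrX  = norm(dx) / norm(xp);            % relative error of solution
boundRhs = cond(M) * norm(dM) / norm(M);   % cond(M) * relative perturbation of M

disp([relErrX, boundRhs])                  % first entry should not exceed the second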

The conditioning number can be shown to be

\begin{equation}\label{eqn:multiphysicsL6:241}
\text{cond}(M) =
\frac
{\sigma_{\mathrm{max}}}
{\sigma_{\mathrm{min}}}
\ge 1
\end{equation}

To justify this, note that \( \Norm{M}_2 = \sigma_{\mathrm{max}} \), and that inverting \( M = U \Sigma V^\T \) gives \( M^{-1} = V \Sigma^{-1} U^\T \), whose singular values are the reciprocals \( 1/\sigma_i \), so \( \Norm{M^{-1}}_2 = 1/\sigma_{\mathrm{min}} \).

Sensitivity to conditioning number

Double precision relative rounding errors can be of the order \( 10^{-16} \sim 2^{-54} \), which allows us to gauge the relative error of the solution.  Treating that rounding error as the relative perturbation in the bound above, a conditioning number of order \( 10^k \) means the relative error of the solution can be as large as \( 10^{k-16} \), i.e. roughly \( k \) significant digits are lost.
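As a concrete illustration (a Matlab sketch, not from the lecture, using the classically ill conditioned Hilbert matrix), solving a system with a known exact solution shows roughly this loss of digits:

% Solve H x = b where the exact solution is known, and compare the
% observed relative error to what the conditioning number suggests.
H = hilb(10);                   % Hilbert matrix; cond(H) is roughly 1e13
xTrue = ones(10, 1);
b = H * xTrue;                  % right hand side consistent with xTrue

x = H \ b;                      % numerical solution
relErr = norm(x - xTrue) / norm(xTrue);

disp(cond(H))                   % ~1e13
disp(relErr)                    % typically ~1e-4 .. 1e-3: only a few digits survive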

 
