## ECE1254H Modeling of Multiphysics Systems. Lecture 13: Continuation parameters and Simulation of dynamical systems. Taught by Prof. Piero Triverio

October 29, 2014

## Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

## Singular Jacobians

(mostly on slides)

There is the possibility of singular Jacobians to consider. FIXME: not sure how this system represented that. Look on slides.

fig. 1. Diode system that results in singular Jacobian

\label{eqn:multiphysicsL13:20}
\tilde{f}(v(\lambda), \lambda) = i(v) - \inv{R}( v - \lambda V_s ) = 0.

An alternate continuation scheme uses

\label{eqn:multiphysicsL13:40}
\tilde{F}(\Bx(\lambda), \lambda) = \lambda F(\Bx(\lambda)) + (1-\lambda) \Bx(\lambda).

This scheme has

\label{eqn:multiphysicsL13:60}
\tilde{F}(\Bx(0), 0) = 0

\label{eqn:multiphysicsL13:80}
\tilde{F}(\Bx(1), 1) = F(\Bx(1)),

and, for one variable, has an easily computed Jacobian at $$\lambda = 0$$ (the identity), recovering the original Jacobian at $$\lambda = 1$$

\label{eqn:multiphysicsL13:100}
\PD{x}{\tilde{F}}(x(0), 0) = I

\label{eqn:multiphysicsL13:120}
\PD{x}{\tilde{F}}(x(1), 1) = \PD{x}{F}(x(1))
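A minimal numeric sketch of this continuation scheme in Python, against an arbitrary scalar target $$F(x) = x^3 - 2$$ (the target function, the ten equal $$\lambda$$ steps, and the helper names are illustrative choices, not from the lecture):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Plain 1D Newton iteration."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / df(x)
        x += dx
        if abs(dx) < tol:
            return x
    return x

def F(x):
    return x**3 - 2.0

def dF(x):
    return 3.0 * x**2

# Homotopy: Ftilde(x, lam) = lam*F(x) + (1 - lam)*x, so that
# Ftilde(x, 0) = x (trivially solved by x = 0, Jacobian = identity)
# and Ftilde(x, 1) = F(x), the original problem.
x = 0.0                        # exact solution of the lambda = 0 problem
for lam in [0.1 * k for k in range(1, 11)]:
    g  = lambda x, lam=lam: lam * F(x) + (1.0 - lam) * x
    dg = lambda x, lam=lam: lam * dF(x) + (1.0 - lam)
    x = newton(g, dg, x)       # previous solution seeds the next subproblem

print(x)                       # ~ 2**(1/3) = 1.2599...
```

Each subproblem is mild because its initial guess, the previous solution, is already close; at $$\lambda = 1$$ the original problem has been solved.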

## Simulation of dynamical systems

Example high level system in fig. 2.

fig. 2. Complex time dependent system

## Assembling equations automatically for dynamical systems

### RC circuit

To demonstrate the method by example, consider the RC circuit of fig. 3, which has time dependence that must be accounted for.

fig. 3. RC circuit

The unknowns are $$v_1(t), v_2(t)$$.

The equations (KCLs) at each of the nodes are

1. $$\frac{v_1(t)}{R_1} + C_1 \frac{dv_1}{dt} + \frac{v_1(t) - v_2(t)}{R_2} + C_2 \frac{d(v_1 - v_2)}{dt} - i_{s,1}(t) = 0$$
2. $$\frac{v_2(t) - v_1(t)}{R_2} + C_2 \frac{d(v_2 - v_1)}{dt} + \frac{v_2(t)}{R_3} + C_3 \frac{dv_2}{dt} - i_{s,2}(t) = 0$$

This has the matrix form
\label{eqn:multiphysicsL13:140}
\begin{bmatrix}
Z_1 + Z_2 & - Z_2 \\
-Z_2 & Z_2 + Z_3
\end{bmatrix}
\begin{bmatrix}
v_1(t) \\
v_2(t)
\end{bmatrix}
+
\begin{bmatrix}
C_1 + C_2 & -C_2 \\
- C_2 & C_2 + C_3
\end{bmatrix}
\begin{bmatrix}
\frac{dv_1(t)}{dt} \\
\frac{dv_2(t)}{dt}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\begin{bmatrix}
i_{s,1}(t) \\
i_{s,2}(t)
\end{bmatrix},

where $$Z_i = 1/R_i$$.

Observe that the capacitor between nodes 1 and 2 is associated with a stamp of the form

\label{eqn:multiphysicsL13:180}
\begin{bmatrix}
C_2 & -C_2 \\
-C_2 & C_2
\end{bmatrix},

very much like the impedance stamps of the resistor node elements.
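As an illustration of how such stamps might be assembled programmatically (a Python sketch; the helper name and the component values are invented, and node 0 denotes ground):

```python
import numpy as np

def stamp_conductance(M, n1, n2, g):
    """Stamp a two-terminal value g between nodes n1 and n2 (0 = ground):
    +g on the diagonals, -g off-diagonal, skipping the ground row/column."""
    for a, b, s in [(n1, n1, +g), (n2, n2, +g), (n1, n2, -g), (n2, n1, -g)]:
        if a > 0 and b > 0:
            M[a - 1, b - 1] += s

# The capacitor stamp has exactly the same +/- pattern as the resistor
# stamp, just applied to the C matrix instead of G.  Values are made up.
N = 2
G = np.zeros((N, N))
C = np.zeros((N, N))

stamp_conductance(G, 1, 0, 1.0 / 1e3)   # R1 = 1k, node 1 to ground
stamp_conductance(G, 1, 2, 1.0 / 2e3)   # R2 = 2k, between nodes 1 and 2
stamp_conductance(G, 2, 0, 1.0 / 3e3)   # R3 = 3k, node 2 to ground
stamp_conductance(C, 1, 0, 1e-6)        # C1, node 1 to ground
stamp_conductance(C, 1, 2, 2e-6)        # C2, between nodes 1 and 2
stamp_conductance(C, 2, 0, 3e-6)        # C3, node 2 to ground

print(C)   # pattern: [[C1 + C2, -C2], [-C2, C2 + C3]]
```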

The RC circuit problem has the abstract form

\label{eqn:multiphysicsL13:160}
G \Bx(t) + C \frac{d\Bx(t)}{dt} = B \Bu(t),

which is more general than a state space equation of the form

\label{eqn:multiphysicsL13:200}
\frac{d\Bx(t)}{dt} = A \Bx(t) + B \Bu(t).

Such a system may be represented diagramatically as in fig. 4.

fig. 4. State space system

The $$C$$ factor in this capacitance system is generally not invertible. For example, consider a ten node system with only one capacitor, for which $$C$$ will be mostly zeros.
In a state space system every equation contains a derivative: all equations are dynamical.

The time dependent MNA system for the RC circuit above contains a mix of dynamical and algebraic equations. This could, for example, be a pair of equations like

\label{eqn:multiphysicsL13:240}
\frac{dx_1}{dt} + x_2 + 3 = 0

\label{eqn:multiphysicsL13:260}
x_1 + x_2 + 3 = 0

### How to handle inductors

A pair of nodes that contains an inductor element, as in fig. 5, has to be handled specially.

fig. 5. Inductor configuration

The KCL at node 1 has the form

\label{eqn:multiphysicsL13:280}
\cdots + i_L(t) + \cdots = 0,

where

\label{eqn:multiphysicsL13:300}
v_{n_1}(t) - v_{n_2}(t) = L \frac{d i_L}{dt}.

It is possible to express this in terms of $$i_L$$, the variable of interest

\label{eqn:multiphysicsL13:320}
i_L(t) = \inv{L} \int_0^t \lr{ v_{n_1}(\tau) - v_{n_2}(\tau) } d\tau
+ i_L(0).

Expressing the problem directly in terms of such integrals makes the problem harder to solve, since the usual differential equation toolbox cannot be used directly (an integro-differential toolbox would have to be developed). What can be done instead is to introduce an additional unknown $$i_L$$ for each inductor, along with an additional MNA row enforcing $$v_{n_1} - v_{n_2} - L \, di_L/dt = 0$$ for that inductor.
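A concrete sketch of the resulting augmented matrices for a single inductor between nodes 1 and 2 (illustrative Python; the companion resistors and all values are made up):

```python
import numpy as np

# Augmented MNA for one inductor L between nodes 1 and 2, in the
# G x + C dx/dt = 0 form, with unknowns x = [v1, v2, iL].
# Made-up companion network: a resistor from each node to ground.
R1, R2, L = 100.0, 200.0, 1e-3   # illustrative values

G = np.array([
    [1.0 / R1, 0.0,      +1.0],  # KCL node 1: v1/R1 + iL = 0
    [0.0,      1.0 / R2, -1.0],  # KCL node 2: v2/R2 - iL = 0
    [1.0,     -1.0,       0.0],  # branch row: v1 - v2 - L diL/dt = 0
])
C = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, -L],              # the -L diL/dt term of the branch row
])
print(G.shape, C.shape)
```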

## Numerical solution of differential equations

Consider the one variable system

\label{eqn:multiphysicsL13:340}
G x(t) + C \frac{dx}{dt} = B u(t),

given an initial condition $$x(0) = x_0$$. Imagine that this system has the solution sketched in fig. 6.

fig. 6. Discrete time sampling

Very roughly, the steps for solution are of the form

1. Discretize time
2. Aim to find the solution at $$t_1, t_2, t_3, \cdots$$
3. Use a finite difference formula to approximate the derivative.

There are various schemes that can be used to discretize, and compute the finite differences.

### Forward Euler method

\index{forward Euler method}

One such scheme is to use the forward differences, as in fig. 7, to approximate the derivative

\label{eqn:multiphysicsL13:360}
\dot{x}(t_n) \approx \frac{x_{n+1} - x_n}{\Delta t}.

fig. 7. Forward difference derivative approximation

Introducing this into \ref{eqn:multiphysicsL13:340} gives

\label{eqn:multiphysicsL13:350}
G x_n + C \frac{x_{n+1} - x_n}{\Delta t} = B u(t_n),

or

\label{eqn:multiphysicsL13:380}
C x_{n+1} = \Delta t B u(t_n) - \Delta t G x_n + C x_n.

The coefficient $$C$$ must be invertible, and the next point follows immediately

\label{eqn:multiphysicsL13:381}
x_{n+1} = \frac{\Delta t B}{C} u(t_n)
+ x_n \lr{ 1 - \frac{\Delta t G}{C} }.
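A minimal scalar forward Euler stepper for this update (Python; the RC decay test case and all values are illustrative):

```python
import math

def forward_euler(G, C, B, u, x0, dt, nsteps):
    """Forward Euler for the scalar system G x + C dx/dt = B u(t):
    x_{n+1} = x_n + (dt/C) * (B u(t_n) - G x_n)."""
    x, xs = x0, [x0]
    for n in range(nsteps):
        x = x + (dt / C) * (B * u(n * dt) - G * x)
        xs.append(x)
    return xs

# RC decay: G = 1/R, no source; exact solution is x(t) = exp(-t/(R*C)).
R, Cap = 1.0, 1.0
xs = forward_euler(G=1.0 / R, C=Cap, B=0.0, u=lambda t: 0.0,
                   x0=1.0, dt=1e-3, nsteps=1000)
print(xs[-1], math.exp(-1.0))   # numerical vs exact value at t = 1
```

Shrinking `dt` reduces the gap between the two printed values, at the cost of more steps.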

## Current collection of multiphysics modeling notes (ece1254)

October 27, 2014

I’ve now posted v.1 of my current notes collection for “Modeling of Multiphysics Systems” (ECE1254H1F), a course I’m now taking, taught by Prof. Piero Triverio. This includes the following individual lecture or personal notes, which may or may not have been posted as separate blog entries:

October 27, 2014 Struts and Joints, Node branch formulation

October 24, 2014 Conjugate gradient method

October 22, 2014 Nonlinear equations

October 22, 2014 Conjugate gradient methods

October 21, 2014 Nonlinear systems

October 21, 2014 Sparse factorization and iterative methods

October 15, 2014 Illustrating the LU algorithm with pivots by example

October 14, 2014 Modified Nodal Analysis

October 09, 2014 Numerical LU example where pivoting is required

October 09, 2014 Numeric LU factorization example

October 02, 2014 Singular Value Decomposition

October 01, 2014 Matrix norm, singular decomposition, and conditioning number

September 30, 2014 Numerical error and conditioning.

September 26, 2014 Solving large systems

September 24, 2014 Nodal Analysis

September 23, 2014 Assembling system equations automatically

September 23, 2014 Analogies to circuit systems


## Struts and Joints, Node branch formulation

Let’s consider the simple strut system of fig. 1 again.

fig. 1. Simple strut system

Our unknowns are

1. The forces. At each of the points we have a force with two components
\label{eqn:multiphysicsL12:20}
\Bf_A = \lr{ f_{A,x}, f_{A,y} }.
We construct a total force vector
\label{eqn:multiphysicsL12:40}
\Bf =
\begin{bmatrix}
f_{A,x} \\
f_{A,y} \\
f_{B,x} \\
f_{B,y} \\
\vdots
\end{bmatrix}
2. Positions of the joints
\label{eqn:multiphysicsL12:60}
\Br =
\begin{bmatrix}
x_1 \\
y_1 \\
x_2 \\
y_2 \\
\vdots
\end{bmatrix}

Our given variables are

1. The load force $$\Bf_L$$.
2. The joint positions at rest.
3. The parameters of the struts.

### Conservation laws

The conservation laws are

\label{eqn:multiphysicsL12:80}
\Bf_A + \Bf_B + \Bf_C = 0

\label{eqn:multiphysicsL12:100}
-\Bf_C + \Bf_D + \Bf_L = 0

which we can put in matrix form

\label{eqn:multiphysicsL12:120}
\begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
f_{A,x} \\
f_{A,y} \\
f_{B,x} \\
f_{B,y} \\
f_{C,x} \\
f_{C,y} \\
f_{D,x} \\
f_{D,y}
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
-f_{L,x} \\
-f_{L,y}
\end{bmatrix}

Here the block matrix is called the incidence matrix $$A$$, and we write

\label{eqn:multiphysicsL12:160}
A \Bf = \Bf_L.

### Constitutive laws

Given a pair of nodes as in fig. 2.

fig. 2. Strut spanning nodes

each component has an equation relating the reaction forces of that strut based on the positions

\label{eqn:multiphysicsL12:180}
f_{c,x} = S_x \lr{ x_1 - x_2, y_1 - y_2, p_c }

\label{eqn:multiphysicsL12:200}
f_{c,y} = S_y \lr{ x_1 - x_2, y_1 - y_2, p_c },

where $$p_c$$ represents the parameters of the strut. We write

\label{eqn:multiphysicsL12:220}
\Bf =
\begin{bmatrix}
f_{A,x} \\
f_{A,y} \\
f_{B,x} \\
f_{B,y} \\
\vdots
\end{bmatrix}
=
\begin{bmatrix}
S_x \lr{ x_1 - x_2, y_1 - y_2, p_c } \\
S_y \lr{ x_1 - x_2, y_1 - y_2, p_c } \\
\vdots
\end{bmatrix},

or

\label{eqn:multiphysicsL12:260}
\Bf = S(\Br).

### Putting the pieces together

The node branch formulation is

\label{eqn:multiphysicsL12:280}
\begin{aligned}
A \Bf - \Bf_L &= 0 \\
\Bf - S(\Br) &= 0
\end{aligned}

We’ll want to solve this system with the Newton’s method and Jacobian machinery discussed previously, and can expect the cost of that Jacobian calculation to potentially be expensive. To move to the nodal formulation we eliminate forces (the equivalent of currents in this system)

\label{eqn:multiphysicsL12:320}
A S(\Br) - \Bf_L = 0.

We cannot use this nodal formulation when we have struts that are so stiff that the positions of some of the nodes are fixed, but can work around that as before by introducing an additional unknown for each component of such a strut.
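When an analytic Jacobian of the nodal residual $$A S(\Br) - \Bf_L$$ is inconvenient, one option is a finite difference approximation, at the cost of one extra residual evaluation per unknown (a Python sketch; the one-strut linear residual standing in for $$A S(\Br) - \Bf_L$$ is made up):

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Approximate J[i, j] = dF_i/dx_j by forward differences:
    one extra evaluation of F per unknown."""
    x = np.asarray(x, dtype=float)
    F0 = F(x)
    J = np.zeros((F0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - F0) / h
    return J

# Hypothetical residual: a single linear strut of stiffness k pulling
# the joint position r toward the origin, against a load fL.
k, fL = 2.0, np.array([1.0, 0.5])
F = lambda r: k * r - fL          # stands in for A S(r) - fL

Jfd = fd_jacobian(F, np.zeros(2))
print(Jfd)                        # ~ [[2, 0], [0, 2]]
```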

## Damped Newton’s method

We want to be able to deal with the oscillation that we can have in examples like that of fig. 3.

fig. 3. Oscillatory Newton’s iteration

Large steps can be dangerous. We want to modify Newton’s method as follows

Our algorithm is

• Guess $$\Bx^0, k = 0$$.
• REPEAT
• Compute $$F$$ and $$J_F$$ at $$\Bx^k$$
• Solve linear system $$J_F(\Bx^k) \Delta \Bx^k = – F(\Bx^k)$$
• $$\Bx^{k+1} = \Bx^k + \alpha^k \Delta \Bx^k$$
• $$k = k + 1$$
• UNTIL converged

with $$\alpha^k = 1$$ we have standard Newton’s method. We want to pick $$\alpha^k$$ so that we minimize $$\Norm{ F(\Bx^k + \alpha^k \Delta \Bx^k) }$$, the residual norm along the Newton direction.
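One common concrete rule (a backtracking sketch, not necessarily the rule on the slides) is to start with $$\alpha^k = 1$$ and halve it until the residual norm decreases. The arctangent test problem below is made up; a full Newton step from such a far starting point overshoots badly:

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Newton with backtracking: halve alpha until ||F|| decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        dx = np.linalg.solve(J(x), -Fx)
        alpha = 1.0
        while (np.linalg.norm(F(x + alpha * dx)) >= np.linalg.norm(Fx)
               and alpha > 1e-8):
            alpha *= 0.5
        x = x + alpha * dx
    return x

# Scalar test: F(x) = arctan(x), root at 0.  From x0 = 10 a full
# Newton step jumps far past the root; damping recovers convergence.
F = lambda x: np.array([np.arctan(x[0])])
J = lambda x: np.array([[1.0 / (1.0 + x[0] ** 2)]])
root = damped_newton(F, J, [10.0])
print(root)   # ~ [0.0]
```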

## Continuation parameters

Newton’s method converges given a close initial guess. We can generate a sequence of problems where the previous problem generates a good initial guess for the next problem.

An example is a heat conducting bar, with a final heat distribution. We can start the numeric iteration with $$T = 0$$, and gradually increase the temperatures until we achieve the final desired heat distribution.

Suppose that we want to solve

\label{eqn:multiphysicsL12:340}
F(\Bx) = 0.

We modify this problem by introducing

\label{eqn:multiphysicsL12:360}
\tilde{F}(\Bx(\lambda), \lambda) = 0,

where

• $$\tilde{F}(\Bx(0), 0) = 0$$ is easy to solve
• $$\tilde{F}(\Bx(1), 1) = 0$$ is equivalent to $$F(\Bx) = 0$$.
• (more on slides)

The source load stepping algorithm is

• Solve $$\tilde{F}(\Bx(0), 0) = 0$$, with $$\Bx(\lambda_{\text{prev}}) = \Bx(0)$$
• (more on slides)

This can still have problems, for example, when the parameterization is multivalued as in fig. 4.

fig. 4. Multivalued parameterization

We can attempt to adjust $$\lambda$$ so that we move along the parameterization curve.

## ECE1254H Modeling of Multiphysics Systems. Lecture 11: Nonlinear equations. Taught by Prof. Piero Triverio

October 22, 2014


## Solution of N nonlinear equations in N unknowns

We’d now like to move from solutions of nonlinear functions in one variable:

\label{eqn:multiphysicsL11:200}
f(x^\conj) = 0,

to multivariable systems of the form

\label{eqn:multiphysicsL11:20}
\begin{aligned}
f_1(x_1, x_2, \cdots, x_N) &= 0 \\
\vdots & \\
f_N(x_1, x_2, \cdots, x_N) &= 0 \\
\end{aligned},

where our unknowns are
\label{eqn:multiphysicsL11:40}
\Bx =
\begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_N \\
\end{bmatrix}.

Form the vector $$F$$

\label{eqn:multiphysicsL11:60}
F(\Bx) =
\begin{bmatrix}
f_1(x_1, x_2, \cdots, x_N) \\
\vdots \\
f_N(x_1, x_2, \cdots, x_N) \\
\end{bmatrix},

so that the equation to solve is

\label{eqn:multiphysicsL11:80}
\boxed{
F(\Bx) = 0.
}

The first order Taylor expansion of $$F$$ around the point $$\Bx_0$$ is

\label{eqn:multiphysicsL11:100}
F(\Bx) \approx F(\Bx_0) +
\underbrace{ J_F(\Bx_0) }_{\text{Jacobian}}
\lr{ \Bx - \Bx_0},

where the Jacobian is

\label{eqn:multiphysicsL11:120}
J_F(\Bx_0)
=
\begin{bmatrix}
\PD{x_1}{f_1} & \cdots & \PD{x_N}{f_1} \\
& \ddots & \\
\PD{x_1}{f_N} & \cdots & \PD{x_N}{f_N}
\end{bmatrix}.

## Multivariable Newton’s iteration

Given $$\Bx^k$$, expand $$F(\Bx)$$ around $$\Bx^k$$

\label{eqn:multiphysicsL11:140}
F(\Bx) \approx F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx - \Bx^k }

With the approximation

\label{eqn:multiphysicsL11:160}
0 = F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx^{k + 1} - \Bx^k },

then multiplying by the inverse Jacobian, and rearranging, we have

\label{eqn:multiphysicsL11:220}
\boxed{
\Bx^{k+1} = \Bx^k - J_F^{-1}(\Bx^k) F(\Bx^k).
}

Our algorithm is

• Guess $$\Bx^0, k = 0$$.
• REPEAT
• Compute $$F$$ and $$J_F$$ at $$\Bx^k$$
• Solve linear system $$J_F(\Bx^k) \Delta \Bx^k = – F(\Bx^k)$$
• $$\Bx^{k+1} = \Bx^k + \Delta \Bx^k$$
• $$k = k + 1$$
• UNTIL converged
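The iteration above translates almost line for line into code (a minimal Python sketch; the circle-line intersection test system is invented for illustration):

```python
import numpy as np

def newton_nd(F, J, x0, eps1=1e-10, eps2=1e-10, max_iter=50):
    """Multivariable Newton: solve J dx = -F rather than inverting J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        dx = np.linalg.solve(J(x), -Fx)   # the linear solve step
        x = x + dx
        if np.linalg.norm(dx) < eps1 and np.linalg.norm(F(x)) < eps2:
            break
    return x

# Test system: intersection of a circle and a line,
#   x^2 + y^2 - 4 = 0,  x - y = 0,  with root x = y = sqrt(2).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_nd(F, J, [1.0, 2.0])
print(sol)   # ~ [1.4142, 1.4142]
```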

As in the one variable case, we declare convergence once all of the convergence conditions are satisfied

\label{eqn:multiphysicsL11:240}
\begin{aligned}
\Norm{ \Delta \Bx^k } &< \epsilon_1 \\
\Norm{ F(\Bx^{k+1}) } &< \epsilon_2 \\
\frac{\Norm{ \Delta \Bx^k }}{\Norm{\Bx^{k+1}}} &< \epsilon_3 \\
\end{aligned}

Typical termination is some multiple of eps, where eps is the machine precision. This may be something like:

\label{eqn:multiphysicsL11:260}
4 \times N \times \text{eps},

where $$N$$ is the “size of the problem”. Sometimes we may be able to find meaningful values for the problem. For example, for a voltage problem, we may not be interested in precisions greater than a millivolt.

## Automatic assembly of equations for nonlinear systems

### Nonlinear circuits

We will start off considering a non-linear resistor, designated within a circuit as sketched in fig. 2.

fig. 2. Non-linear resistor

Example: diode, with $$i = g(v)$$, such as

\label{eqn:multiphysicsL11:280}
i = I_0 \lr{ e^{v/(\eta V_T)} - 1 }.

Consider the example circuit of fig. 3. The KCLs at each of the nodes are

fig. 3. Example circuit

1. $$I_A + I_B + I_D - I_s = 0$$
2. $$- I_B + I_C - I_D = 0$$

Introducing the constitutive equations, this becomes

1. $$g_A(V_1) + g_B(V_1 - V_2) + g_D (V_1 - V_2) - I_s = 0$$
2. $$- g_B(V_1 - V_2) + g_C(V_2) - g_D (V_1 - V_2) = 0$$

In matrix form this is
\label{eqn:multiphysicsL11:300}
\begin{bmatrix}
g_D & -g_D \\
-g_D & g_D
\end{bmatrix}
\begin{bmatrix}
V_1 \\
V_2
\end{bmatrix}
+
\begin{bmatrix}
g_A(V_1) + g_B(V_1 - V_2) - I_s \\
- g_B(V_1 - V_2) + g_C(V_2)
\end{bmatrix}
=
0.

We can write the entire system as

\label{eqn:multiphysicsL11:320}
\boxed{
F(\Bx) = G \Bx + F'(\Bx) = 0.
}

The first term, the product of the nodal matrix $$G$$ with $$\Bx$$, represents the linear subnetwork, and is filled with the stamps we are already familiar with.

The second term encodes the relationships of the nonlinear subnetwork. This non-linear component has been marked with a prime to distinguish it from the complete network function that includes both linear and non-linear elements.

Observe the similarity with the stamp analysis that we did previously. With $$g_A()$$ connected on one end to ground we have it only once in the resulting vector, whereas the nonlinear elements connected to two non-zero nodes in the network occur once with each sign.

### Stamp for nonlinear resistor

For the non-linear circuit element of fig. 4.

fig. 4. Non-linear resistor circuit element

The stamp contributes $$+g(v_{n_1} - v_{n_2})$$ to the KCL row of node $$n_1$$ and $$-g(v_{n_1} - v_{n_2})$$ to that of node $$n_2$$ (for a grounded terminal, only the one row appears).

### Stamp for Jacobian

\label{eqn:multiphysicsL11:360}
J_F(\Bx^k) = G + J_{F'}(\Bx^k).

Here the Jacobian is an $$N \times N$$ matrix. For a nonlinear resistor between nodes $$n_1$$ and $$n_2$$, with $$g' = \evalbar{\PD{v}{g}}{v_{n_1} - v_{n_2}}$$, the stamp for the Jacobian is the block

\begin{equation*}
\begin{bmatrix}
g' & -g' \\
-g' & g'
\end{bmatrix},
\end{equation*}

added at the rows and columns for $$n_1, n_2$$.

## ECE1254H Modeling of Multiphysics Systems. Lecture 10: Nonlinear systems. Taught by Prof. Piero Triverio

October 21, 2014


## Nonlinear systems

On slides, some examples to motivate:

• struts
• fluids
• diode (exponential). Example in fig. 1.

fig. 1. Diode circuit

\label{eqn:multiphysicsL10:20}
I_d = I_s \lr{ e^{V_d/V_t} - 1 } = \frac{10 - V_d}{10}.

## Richardson and Linear Convergence

Seeking the exact solution $$x^\conj$$ of

\label{eqn:multiphysicsL10:40}
f(x^\conj) = 0,

suppose that we iterate with

\label{eqn:multiphysicsL10:60}
x^{k + 1} = x^k + f(x^k).

If $$f(x^k) = 0$$ then we have convergence, with $$x^k = x^\conj$$.
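A minimal numeric illustration of this iteration (the linear test function is made up, chosen so that $$1 + f' = 1/2$$ and the error halves each step):

```python
# Richardson iteration x_{k+1} = x_k + f(x_k).  With the made-up
# f(x) = -0.5*(x - 3), the root is x* = 3, and since 1 + f' = 0.5
# the error is halved on every step (linear convergence).
def richardson(f, x0, n):
    x = x0
    for _ in range(n):
        x = x + f(x)
    return x

f = lambda x: -0.5 * (x - 3.0)
x_final = richardson(f, 0.0, 40)
print(x_final)   # ~ 3.0, with error about 3 * 0.5**40
```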

### Convergence analysis

Write the iteration equations at a sample point and the solution as

\label{eqn:multiphysicsL10:80}
x^{k + 1} = x^k + f(x^k)

\label{eqn:multiphysicsL10:100}
x^{\conj} = x^\conj +
\underbrace{
f(x^\conj)
}_{=0}

Taking the difference we have

\label{eqn:multiphysicsL10:120}
x^{k+1} - x^\conj = x^k - x^\conj + \lr{ f(x^k) - f(x^\conj) }.

The last term can be quantified using the mean value theorem \ref{thm:multiphysicsL10:140}, giving

\label{eqn:multiphysicsL10:140}
x^{k+1} - x^\conj
= x^k - x^\conj +
\evalbar{\PD{x}{f}}{\tilde{x}} \lr{ x^k - x^\conj }
=
\lr{ x^k - x^\conj }
\lr{
1 + \evalbar{\PD{x}{f}}{\tilde{x}} }.

The absolute value is thus

\label{eqn:multiphysicsL10:160}
\Abs{x^{k+1} - x^\conj } =
\Abs{ x^k - x^\conj }
\Abs{
1 + \evalbar{\PD{x}{f}}{\tilde{x}} }.

We have convergence provided $$\Abs{ 1 + \evalbar{\PD{x}{f}}{\tilde{x}} } < 1$$ in the region over which we iterate. This could easily be highly dependent on the initial guess.

Stated more accurately we have convergence provided

\label{eqn:multiphysicsL10:180}
\Abs{
1 + \evalbar{\PD{x}{f}}{\tilde{x}} }
\le \gamma < 1
\qquad \forall \tilde{x} \in [ x^\conj - \delta, x^\conj + \delta ],

and $$\Abs{x^0 - x^\conj } < \delta$$. This is illustrated in fig. 3.

fig. 3. Convergence region

It could very easily be difficult to determine the convergence regions.

We have some problems

• Convergence is only linear
• $$x, f(x)$$ are not in the same units (and potentially of different orders). For example, $$x$$ could be a voltage and $$f(x)$$ could be a circuit current.
• (more on slides)

Examples where we may want to use this:

• SPICE Gummel-Poon transistor model. Lots of diodes, …
• Mosfet model (30 page spec, lots of parameters).

## Newton’s method

The core idea of this method is sketched in fig. 4. To find the intersection with the x-axis, we follow the slope closer to the intersection.

fig. 4. Newton’s method

To do this, we expand $$f(x)$$ in Taylor series to first order around $$x^k$$, and then solve for $$f(x) = 0$$ in that approximation

\label{eqn:multiphysicsL10:200}
f( x^{k+1} ) \approx f( x^k ) + \evalbar{ \PD{x}{f} }{x^k} \lr{ x^{k+1} - x^k } = 0.

This gives

\label{eqn:multiphysicsL10:220}
\boxed{x^{k+1} = x^k - \frac{f( x^k )}{\evalbar{ \PD{x}{f} }{x^k}}}.

### Example: Newton’s method

For the solution of

\label{eqn:multiphysicsL10:260}
f(x) = x^3 - 2,

it was found (table 1.1)

The error tails off fast as illustrated roughly in fig. 6.

fig. 6. Error by iteration
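The tail-off is easy to reproduce in a few lines (Python; the starting guess $$x = 1$$ is arbitrary):

```python
# Newton's iteration for f(x) = x^3 - 2: the error roughly squares
# on each step, so only a handful of iterations reach machine precision.
f  = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2

x = 1.0
for k in range(6):
    x = x - f(x) / df(x)
    print(k, x, abs(x - 2.0 ** (1.0 / 3.0)))   # error column shrinks quadratically
```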

### Convergence analysis

The convergence condition is

\label{eqn:multiphysicsL10:280}
0 = f(x^k) + \evalbar{ \PD{x}{f} }{x^k} \lr{ x^{k+1} - x^k }.

The Taylor series for $$f$$ around $$x^k$$, using a mean value formulation is

\label{eqn:multiphysicsL10:300}
f(x)
= f(x^k)
+ \evalbar{ \PD{x}{f} }{x^k} \lr{ x - x^k }
+ \inv{2} \evalbar{ \PDSq{x}{f} }{\tilde{x} \in [x^k, x]} \lr{ x - x^k }^2.

Evaluating at $$x^\conj$$ we have

\label{eqn:multiphysicsL10:320}
0 = f(x^k)
+ \evalbar{ \PD{x}{f} }{x^k} \lr{ x^\conj - x^k }
+ \inv{2} \evalbar{ \PDSq{x}{f} }{\tilde{x} \in [x^\conj, x^k]} \lr{ x^\conj - x^k }^2,

and subtracting this from \ref{eqn:multiphysicsL10:280} we are left with

\label{eqn:multiphysicsL10:340}
0 = \evalbar{\PD{x}{f}}{x^k} \lr{ x^{k+1} - x^k - x^\conj + x^k }
- \inv{2} \evalbar{\PDSq{x}{f}}{\tilde{x}} \lr{ x^\conj - x^k }^2.

Solving for the difference from the solution, the error is

\label{eqn:multiphysicsL10:360}
x^{k+1} - x^\conj
= \inv{2} \lr{ \evalbar{\PD{x}{f}}{x^k} }^{-1} \evalbar{\PDSq{x}{f}}{\tilde{x}} \lr{ x^k - x^\conj }^2,

or in absolute value

\label{eqn:multiphysicsL10:380}
\Abs{ x^{k+1} - x^\conj }
= \inv{2} \Abs{ \evalbar{\PD{x}{f}}{x^k} }^{-1} \Abs{ \evalbar{\PDSq{x}{f}}{\tilde{x}} } \Abs{ x^k - x^\conj }^2.

We see that convergence is quadratic in the error from the previous iteration. We will have trouble if the derivative goes small at any point in the iteration region, for example in fig. 7, we could easily end up in the zero derivative region.

fig. 7. Newton’s method with small derivative region

### When to stop iteration

One way to check is to look to see if the difference

\label{eqn:multiphysicsL10:420}
\Norm{ x^{k+1} - x^k } < \epsilon_{\Delta x};

however, when the function has a very steep slope, this may not be sufficient unless we also substitute our trial solution and see if we have the match desired.

Alternatively, if the slope is shallow as in fig. 7, then checking for just $$\Abs{ f(x^{k+1}) } < \epsilon_f$$ may also mean we are off target.

Finally, we may also need a relative error check to avoid false convergence. In fig. 10, we may have both

fig. 10. Possible relative error difference required

\label{eqn:multiphysicsL10:440}
\Abs{x^{k+1} - x^k} < \epsilon_{\Delta x}

\label{eqn:multiphysicsL10:460}
\Abs{f(x^{k+1}) } < \epsilon_{f},

however, we may also want a small relative difference

\label{eqn:multiphysicsL10:480}
\frac{\Abs{x^{k+1} - x^k}}{\Abs{x^k}}
< \epsilon_{f,r}.

This can become problematic in real world engineering examples such as the diode of fig. 11, where we have shallow regions and fast growing or dropping regions.

fig. 11. Diode current curve
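The three checks can be combined into a single predicate (a Python sketch; the function name and the tolerance values are placeholders):

```python
def converged(x_new, x_old, f_new, eps_dx=1e-9, eps_f=1e-9, eps_rel=1e-9):
    """All three checks from the notes: absolute step size, residual,
    and relative step size, guarding against false convergence."""
    dx = abs(x_new - x_old)
    return (dx < eps_dx and
            abs(f_new) < eps_f and
            dx < eps_rel * abs(x_new))

# On a steep curve a tiny step can coexist with a large residual,
# so the residual check must be kept alongside the step checks.
print(converged(1.0 + 1e-12, 1.0, 1e-12))   # True: all three pass
print(converged(1.0 + 1e-12, 1.0, 0.5))     # False: residual still large
```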

## Theorems

### Mean value theorem

For a continuous and differentiable function $$f(x)$$, the difference can be expressed in terms of the derivative at an intermediate point

\begin{equation*}
f(x_2) - f(x_1)
= \evalbar{ \PD{x}{f} }{\tilde{x}} \lr{ x_2 - x_1 }
\end{equation*}

where $$\tilde{x} \in [x_1, x_2]$$.

This is illustrated (roughly) in fig. 2.

fig. 2. Mean value theorem illustrated