

Peeter’s lecture notes from class. These may be incoherent and rough.

Nonlinear systems

On slides, some examples to motivate:

  • struts
  • fluids
  • diode (exponential). Example in fig. 1.

    fig. 1. Diode circuit


    I_d = I_s \lr{ e^{V_d/V_t} - 1 } = \frac{10 - V_d}{10}.
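This circuit equation can be solved numerically for \( V_d \). Below is a minimal sketch in Python, assuming illustrative device values \( I_s = 10^{-12} \,\text{A} \) and \( V_t = 25 \,\text{mV} \) (not given in the notes), using plain bisection:

```python
import math

# Hypothetical device values (my choice, not from the lecture): I_s = 1e-12 A, V_t = 25 mV.
I_S = 1e-12
V_T = 0.025

def residual(v_d):
    """f(V_d) = I_s (e^{V_d/V_t} - 1) - (10 - V_d)/10; a root balances the two currents."""
    return I_S * (math.exp(v_d / V_T) - 1.0) - (10.0 - v_d) / 10.0

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f(lo) and f(hi) bracket a root."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

v_d = bisect(residual, 0.0, 1.0)
print(v_d)  # diode voltage, somewhere below a volt for these values
```

Bisection is used here only because it needs no derivative; the Richardson and Newton iterations discussed below are the actual subject of these notes.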

Richardson and Linear Convergence

We seek the exact solution \( x^\conj \) of

f(x^\conj) = 0.

Suppose that we iterate with

x^{k + 1} = x^k + f(x^k)

If \( f(x^k) = 0 \) then we have convergence, so \( x^k = x^\conj \).
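A sketch of this iteration in code (the test function \( f(x) = -(x - 2)/2 \) is my own choice, picked so that \( 1 + \PDi{x}{f} = 1/2 \), which turns out to be the condition for convergence derived below):

```python
# A toy residual (not from the lecture): f(x) = -0.5 (x - 2),
# so f(x*) = 0 at x* = 2, and 1 + df/dx = 0.5.
def f(x):
    return -0.5 * (x - 2.0)

x_star = 2.0
x = 0.0  # initial guess
errors = []
for _ in range(20):
    x = x + f(x)          # Richardson update: x^{k+1} = x^k + f(x^k)
    errors.append(abs(x - x_star))

# Each step shrinks the error by the same factor |1 + df/dx| = 0.5: linear convergence.
ratios = [e1 / e0 for e0, e1 in zip(errors, errors[1:]) if e0 > 0]
print(x, ratios[:3])
```

The constant error ratio printed here is exactly the linear convergence rate quantified in the analysis that follows.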

Convergence analysis

Write the iteration equation at a sample point and at the solution as

x^{k + 1} = x^k + f(x^k)
x^{\conj} = x^\conj + f(x^\conj).

Taking the difference we have

x^{k+1} - x^\conj = x^k - x^\conj + \lr{ f(x^k) - f(x^\conj) }.

The last term can be quantified using the mean value theorem (stated at the end of these notes), giving

x^{k+1} - x^\conj
= x^k - x^\conj + \evalbar{\PD{x}{f}}{\tilde{x}} \lr{ x^k - x^\conj }
= \lr{ x^k - x^\conj }
\lr{ 1 + \evalbar{\PD{x}{f}}{\tilde{x}} }.

The absolute value is thus

\Abs{x^{k+1} - x^\conj } =
\Abs{ x^k - x^\conj }
\Abs{ 1 + \evalbar{\PD{x}{f}}{\tilde{x}} }.

We have convergence provided \( \Abs{ 1 + \evalbar{\PD{x}{f}}{\tilde{x}} } < 1 \) in the region that we iterate over. This could easily be highly dependent on the initial guess.

Stated more accurately we have convergence provided

\Abs{ 1 + \evalbar{\PD{x}{f}}{\tilde{x}} }
\le \gamma < 1
\qquad \forall \tilde{x} \in [ x^\conj - \delta, x^\conj + \delta ],

and \( \Abs{x^0 – x^\conj } < \delta \). This is illustrated in fig. 3.


fig. 3. Convergence region


It could very easily be difficult to determine the convergence regions.

We have some problems:

  • Convergence is only linear
  • \( x, f(x) \) are not in the same units (and potentially of different orders). For example, \(x\) could be a voltage and \( f(x) \) could be a circuit current.
  • (more on slides)

Examples where we may want to use this:

  • SPICE Gummel-Poon transistor model. Lots of diodes, …
  • Mosfet model (30 page spec, lots of parameters).

Newton’s method

The core idea of this method is sketched in fig. 4. To find the intersection with the x-axis, we follow the tangent line at the current point, which lands us closer to that intersection.


fig. 4. Newton’s method


To do this, we expand \( f(x) \) in Taylor series to first order around \( x^k \), and then solve for \( f(x) = 0 \) in that approximation

f( x^{k+1} ) \approx f( x^k ) + \evalbar{ \PD{x}{f} }{x^k} \lr{ x^{k+1} - x^k } = 0.

This gives

\boxed{x^{k+1} = x^k - \frac{f( x^k )}{\evalbar{ \PD{x}{f} }{x^k}}}

Example: Newton’s method

For the solution of

f(x) = x^3 - 2,

the iterates found are tabulated in table 1.1.

The error tails off fast as illustrated roughly in fig. 6.
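A minimal sketch of this iteration for \( f(x) = x^3 - 2 \) (my own reconstruction; the actual tabulated values are on the slides):

```python
# A sketch of the Newton iteration, with the derivative supplied analytically.
def newton(f, df, x, iters=8):
    xs = [x]
    for _ in range(iters):
        x = x - f(x) / df(x)   # x^{k+1} = x^k - f(x^k)/f'(x^k)
        xs.append(x)
    return xs

xs = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x=1.0)
x_star = 2.0 ** (1.0 / 3.0)
errors = [abs(x - x_star) for x in xs]
print(errors)  # the error shrinks very rapidly, down to machine precision
```

The rapid tail-off of the printed errors is the quadratic convergence derived in the analysis below.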


fig. 6. Error by iteration

Convergence analysis

The convergence condition is

0 = f(x^k) + \evalbar{ \PD{x}{f} }{x^k} \lr{ x^{k+1} - x^k }.

The Taylor series for \( f \) around \( x^k \), using a mean value formulation, is

f(x)
= f(x^k)
+ \evalbar{ \PD{x}{f} }{x^k} \lr{ x - x^k }
+ \inv{2} \evalbar{ \PDSq{x}{f} }{\tilde{x} \in [x^\conj, x^k]} \lr{ x - x^k }^2.

Evaluating at \( x^\conj \) we have

0 = f(x^k)
+ \evalbar{ \PD{x}{f} }{x^k} \lr{ x^\conj - x^k }
+ \inv{2} \evalbar{ \PDSq{x}{f} }{\tilde{x} \in [x^\conj, x^k]} \lr{ x^\conj - x^k }^2,

and subtracting this from the Newton condition above, we are left with

0 = \evalbar{\PD{x}{f}}{x^k} \lr{ x^{k+1} - x^k - x^\conj + x^k }
- \inv{2} \evalbar{\PDSq{x}{f}}{\tilde{x}} \lr{ x^\conj - x^k }^2.

Solving for the difference from the solution, the error is

x^{k+1} - x^\conj
= \inv{2} \lr{ \evalbar{ \PD{x}{f} }{x^k} }^{-1} \evalbar{\PDSq{x}{f}}{\tilde{x}} \lr{ x^k - x^\conj }^2,

or in absolute value

\Abs{ x^{k+1} - x^\conj }
= \inv{2} \Abs{ \evalbar{\PD{x}{f}}{x^k} }^{-1} \Abs{ \evalbar{\PDSq{x}{f}}{\tilde{x}} } \Abs{ x^k - x^\conj }^2.
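This quadratic error relation can be spot-checked numerically. A sketch for \( f(x) = x^3 - 2 \) (my own example): the unknown \( \tilde{x} \in [x^\conj, x^k] \) is bounded by evaluating \( f'' \) at the interval endpoints, and the actual next-step error should land between those bounds.

```python
# Spot check of |x^{k+1} - x*| = (1/2) |f''(x~)/f'(x^k)| |x^k - x*|^2 for f(x) = x^3 - 2,
# with the unknown x~ in [x*, x^k] bounded by the interval endpoints.
x_star = 2.0 ** (1.0 / 3.0)
xk = 1.5
xk1 = xk - (xk**3 - 2.0) / (3.0 * xk**2)    # one Newton step

e_k, e_k1 = abs(xk - x_star), abs(xk1 - x_star)
df = 3.0 * xk**2                             # f'(x^k)
d2f_lo, d2f_hi = 6.0 * x_star, 6.0 * xk      # f'' = 6x at the interval endpoints

lo = 0.5 * (d2f_lo / df) * e_k**2
hi = 0.5 * (d2f_hi / df) * e_k**2
print(lo, e_k1, hi)  # the actual next error falls between the two bounds
```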

We see that convergence is quadratic in the error from the previous iteration. We will have trouble if the derivative becomes small at any point in the iteration region; for example, in fig. 7 we could easily end up in the zero-derivative region.


fig. 7. Newton’s method with small derivative region


When to stop iteration

One way to check is to look to see if the difference

\Norm{ x^{k+1} - x^k } < \epsilon_{\Delta x},

however, when the function has a very steep slope, this may not be sufficient unless we also substitute our trial solution and check that we have the match desired.

Alternatively, if the slope is shallow as in fig. 7, then checking just \( \Abs{ f(x^{k+1}) } < \epsilon_f \) may also leave us off target.

Finally, we may also need a relative error check to avoid false convergence. In fig. 10, we may have both


fig. 10. Possible relative error difference required


\Abs{x^{k+1} - x^k} < \epsilon_{\Delta x}
\Abs{f(x^{k+1}) } < \epsilon_{f},

however, we may also want a small relative difference

\frac{\Abs{x^{k+1} - x^k}}{\Abs{x^k}}
< \epsilon_{f,r}.
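These three checks can be combined into one stopping test. A minimal sketch in Python (the tolerance names and values are my own, mirroring the three conditions above), driving a Newton iteration on \( f(x) = x^3 - 2 \):

```python
# A sketch of a combined stopping test (tolerance names are my own):
# absolute step, residual, and relative step must all be small before declaring convergence.
def converged(x_new, x_old, f_new,
              eps_dx=1e-9, eps_f=1e-9, eps_rel=1e-9):
    small_step = abs(x_new - x_old) < eps_dx                    # |x^{k+1} - x^k|
    small_residual = abs(f_new) < eps_f                         # |f(x^{k+1})|
    small_rel_step = abs(x_new - x_old) < eps_rel * abs(x_old)  # relative difference
    return small_step and small_residual and small_rel_step

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
x_old, x = 1.0, 1.0
for _ in range(100):
    x_old, x = x, x - f(x) / df(x)   # Newton update
    if converged(x, x_old, f(x)):
        break
print(x)
```

Requiring all three conditions at once guards against each of the false-convergence modes sketched in the figures above.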

This can become problematic in real-world engineering examples such as the diode of fig. 11, where we have shallow regions and fast growing or dropping regions.


fig. 11. Diode current curve



Mean value theorem

For a continuous and differentiable function \( f(x) \), the difference can be expressed in terms of the derivative at an intermediate point

f(x_2) - f(x_1)
= \evalbar{ \PD{x}{f} }{\tilde{x}} \lr{ x_2 - x_1 }

where \( \tilde{x} \in [x_1, x_2] \).

This is illustrated (roughly) in fig. 2.


fig. 2. Mean value theorem illustrated
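

As a quick sanity check (my own example, not from the lecture), take \( f(x) = x^2 \) on \( [1, 3] \):

f(3) - f(1) = 9 - 1 = 8 = \evalbar{ \PD{x}{f} }{\tilde{x}} \lr{ 3 - 1 } = 4 \tilde{x},

which gives \( \tilde{x} = 2 \in [1, 3] \), as the theorem requires.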