

Peeter’s lecture notes from class. These may be incoherent and rough.

Backward Euler method

Discretized time dependent partial differential equations were seen to have the form

G \Bx(t) + C \dot{\Bx}(t) = B \Bu(t),

where \( G, C, B \) are matrices, and \( \Bu(t) \) is a vector of sources.

The backward Euler method augments \ref{eqn:multiphysicsL14:20} with an initial condition. For a one dimensional system such an initial condition could be a zero time specification

G x(t) + C \dot{x}(t) = B u(t),
x(0) = x_0

Discretizing time as in fig. 1.


fig. 1. Discretized time

The discrete derivative, using a backward difference, is

\dot{x}(t = t_n) \approx \frac{ x_n - x_{n-1} }{\Delta t}

Evaluating \ref{eqn:multiphysicsL14:40} at \( t = t_n \) gives

G x_n + C \dot{x}(t = t_n) = B u(t_n),

or approximately

G x_n + C \frac{x_n - x_{n-1}}{\Delta t} = B u(t_n).


\lr{ G + \frac{C}{\Delta t} } x_n = \frac{C}{\Delta t} x_{n-1}
+ B u(t_n).

Assuming that the matrices \( G, C \) are constant and \( \Delta t \) is fixed, a matrix inversion can be avoided: a single LU decomposition of \( G + C/\Delta t \) can be computed once and reused at every step. For \( N \) sampling points (not counting \( t_0 = 0 \)), \( N \) sets of forward and backward substitutions are required: \( x_1 \) from \( x_0 \), then \( x_2 \) from \( x_1 \), and so forth.
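For a concrete feel, here is a minimal sketch of this update in Python for the scalar test problem \( \dot{x} = -x \) (i.e. \( G = C = 1 \), \( B u = 0 \), exact solution \( x(t) = x_0 e^{-t} \)); the precomputed scalar coefficient plays the role of the reused LU factorization:

```python
# Backward Euler for the scalar problem C x'(t) + G x(t) = 0 with G = C = 1,
# i.e. x'(t) = -x(t), whose exact solution is x(t) = x0 * exp(-t).
import math

def backward_euler(x0, dt, n_steps, g=1.0, c=1.0):
    # With G, C, dt all fixed, the "factorization" of (G + C/dt) is done
    # once and reused every step (the scalar analogue of a single LU).
    a = (c / dt) / (g + c / dt)
    x = x0
    for _ in range(n_steps):
        x = a * x                # (G + C/dt) x_n = (C/dt) x_{n-1}
    return x

x = backward_euler(x0=1.0, dt=0.01, n_steps=100)
print(x, math.exp(-1.0))         # approximately e^{-1}
```

Note that the per-step factor \( (C/\Delta t)/(G + C/\Delta t) = 1/(1 + \Delta t) \) has magnitude less than one for any positive \( \Delta t \), which is the stability of the implicit method in miniature.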

Backward Euler is an implicit method.

Recall that the forward Euler method gave

x_{n+1} =
\lr{ I - C^{-1} \Delta t G } x_n
+ C^{-1} \Delta t B u(t_n)

This required

  • \( C \) must be invertible.
  • \( C \) must be cheap to invert, perhaps \( C = I \), so that
    x_{n+1} =
    \lr{ I - \Delta t G } x_n
    + \Delta t B u(t_n)
  • This is an explicit method
  • This can be cheap but unstable.
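The stability contrast can be seen on the same scalar problem \( \dot{x} = -x \) with \( C = I \): forward Euler multiplies by \( 1 - \Delta t \), which diverges for \( \Delta t > 2 \), while backward Euler divides by \( 1 + \Delta t \), which decays for any positive step. A sketch:

```python
# Explicit (forward) vs implicit (backward) Euler for x'(t) = -x(t),
# i.e. G = 1, C = 1, B u = 0.

def step_forward(x, dt):
    return (1.0 - dt) * x        # x_{n+1} = (I - dt G) x_n   (C = I)

def step_backward(x, dt):
    return x / (1.0 + dt)        # (G + C/dt) x_n = (C/dt) x_{n-1}

dt = 3.0                         # deliberately too large a step
xf = xb = 1.0
for _ in range(20):
    xf = step_forward(xf, dt)
    xb = step_backward(xb, dt)
print(abs(xf), abs(xb))          # forward blows up, backward decays
```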

Trapezoidal rule (TR)

The derivative can be approximated using an average of the pair of derivatives as illustrated in fig. 2.


fig. 2. Trapezoidal derivative approximation

\frac{x_n - x_{n-1}}{\Delta t} \approx \frac{ \dot{x}(t_{n-1}) + \dot{x}(t_{n}) }{2}

Application to \ref{eqn:multiphysicsL14:40} for \( t_{n-1}, t_n \) respectively gives

G x_{n-1} + C \dot{x}(t_{n-1}) &= B u(t_{n-1}) \\
G x_{n} + C \dot{x}(t_{n}) &= B u(t_{n}) \\

Averaging these

G \frac{ x_{n-1} + x_n }{2} + C \frac{ \dot{x}(t_{n-1}) + \dot{x}(t_{n}) }{2}
= B \frac{ u(t_{n-1}) + u(t_{n}) }{2},

and inserting the trapezoidal approximation

G \frac{ x_{n-1} + x_n }{2}
+ C \frac{ x_{n} - x_{n-1} }{\Delta t}
= B \frac{ u(t_{n-1}) + u(t_{n}) }{2},

and a final rearrangement yields

\lr{ G + \frac{2}{\Delta t} C } x_n
=
-\lr{ G - \frac{2}{\Delta t} C } x_{n-1}
+ B \lr{ u(t_{n-1}) + u(t_{n}) }.

This is

  • also an implicit method.
  • requires LU of \( G + 2 C /\Delta t \).
  • more accurate than BE, for the same computational cost.
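For the same scalar test problem \( \dot{x} = -x \) (\( G = C = 1 \), \( B u = 0 \)), a minimal sketch of the trapezoidal update, with the left-hand coefficient again precomputed once as the scalar stand-in for an LU factorization:

```python
# Trapezoidal rule for x'(t) = -x(t):
# (G + 2C/dt) x_n = -(G - 2C/dt) x_{n-1}, with G = C = 1.
import math

def trapezoidal(x0, dt, n_steps, g=1.0, c=1.0):
    # Fixed dt: the coefficient is computed once and reused each step.
    a = (2.0 * c / dt - g) / (2.0 * c / dt + g)
    x = x0
    for _ in range(n_steps):
        x = a * x
    return x

x = trapezoidal(1.0, 0.01, 100)
print(x, math.exp(-1.0))   # much closer to e^{-1} than backward Euler at this dt
```

With \( \Delta t = 0.01 \) over 100 steps the backward Euler sketch above lands within about \( 2 \times 10^{-3} \) of \( e^{-1} \), while the trapezoidal result is within about \( 10^{-5} \), reflecting its second-order accuracy.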

In all of these methods, accumulation of error is something to be very careful of, and in some cases such error accumulation can even be exponential.

This is effectively a way to introduce central differences. On the slides this is seen to be more effective at avoiding both the artificial damping seen in the backward Euler method and the error accumulation seen in the forward Euler method.
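The artificial damping can be illustrated with the undamped oscillator \( \dot{x}_1 = x_2 \), \( \dot{x}_2 = -x_1 \) (i.e. \( C = I \), \( G = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \), \( B u = 0 \)), whose exact solution has constant amplitude \( \sqrt{x_1^2 + x_2^2} \). A sketch, with the \( 2 \times 2 \) solves written out by hand, shows backward Euler shrinking the amplitude while the trapezoidal rule preserves it:

```python
# Undamped oscillator: C x' + G x = 0 with C = I, G = [[0, -1], [1, 0]],
# so x1' = x2, x2' = -x1; the exact amplitude hypot(x1, x2) is constant.
import math

def solve2(a, b, c, d, r1, r2):
    """Solve the 2x2 system [[a, b], [c, d]] x = (r1, r2) by Cramer's rule."""
    det = a * d - b * c
    return (d * r1 - b * r2) / det, (a * r2 - c * r1) / det

def be_step(x1, x2, dt):
    # Backward Euler: (G + I/dt) x_n = x_{n-1}/dt
    return solve2(1.0 / dt, -1.0, 1.0, 1.0 / dt, x1 / dt, x2 / dt)

def tr_step(x1, x2, dt):
    # Trapezoidal: (G + 2I/dt) x_n = (2I/dt - G) x_{n-1}
    r1 = (2.0 / dt) * x1 + x2
    r2 = -x1 + (2.0 / dt) * x2
    return solve2(2.0 / dt, -1.0, 1.0, 2.0 / dt, r1, r2)

dt, n = 0.1, 200
b1, b2 = 1.0, 0.0
t1, t2 = 1.0, 0.0
for _ in range(n):
    b1, b2 = be_step(b1, b2, dt)
    t1, t2 = tr_step(t1, t2, dt)
print(math.hypot(b1, b2))   # well below 1: backward Euler damps the oscillation
print(math.hypot(t1, t2))   # 1 (to roundoff): trapezoidal preserves amplitude
```

Per step, backward Euler scales the amplitude by \( 1/\sqrt{1 + \Delta t^2} < 1 \), while the trapezoidal amplification factor has magnitude exactly one for this system, which is why no artificial damping appears.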