Numerical Methods for Differential Equations
Chapter 2: Runge–Kutta and Linear Multistep methods
Gustaf Söderlind and Carmen Arévalo
Numerical Analysis, Lund University
Textbooks: A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles
and Introduction to Mathematical Modelling with Differential Equations, by Lennart Edsberg
© Gustaf Söderlind, Numerical Analysis, Mathematical Sciences, Lund University, 2008–09
Chapter 2: contents
◮ Solving nonlinear equations
◮ Fixed points
◮ Newton’s method
◮ Quadrature
◮ Runge–Kutta methods
◮ Embedded RK methods and adaptivity
◮ Implicit Runge–Kutta methods
◮ Stability and the stability function
◮ Linear multistep methods
1. Solving nonlinear equations f(x) = 0
We can have a single equation
x− cos(x) = 0
or a system
4x^2 − y^2 = 0
4xy^2 − x = 1
Nonlinear equations may have
no solution; one solution; any finite number of solutions; infinitely many solutions
Iteration and convergence
Nonlinear equations are solved iteratively. One computes a sequence {x[k]} of approximations to the root x∗
For f(x∗) = 0 , let e[k] = x[k] − x∗
Definition The method converges if limk→∞ ‖e[k]‖ = 0
Definition The convergence is
◮ linear if ‖e[k+1]‖ ≤ c · ‖e[k]‖ with 0 < c < 1
◮ quadratic if ‖e[k+1]‖ ≤ c · ‖e[k]‖^p with p = 2
◮ superlinear if p > 1;
◮ cubic if p = 3, etc.
2. Fixed points
Definition x is called a fixed point of the function g if
x = g(x)
Definition A function g is called contractive if
‖g(x) − g(y)‖ ≤ L[g] · ‖x − y‖
with L[g] < 1 for all x, y in the domain of g
Fixed Point Theorem
Theorem Assume that g is continuously differentiable on the compact interval I, i.e., g ∈ C1(I)
◮ If g : I → I there exists an x∗ ∈ I such that x∗ = g(x∗)
◮ If in addition L[g] < 1 on I, then x∗ is unique, and . . .
◮ . . . the iteration
xn+1 = g(xn)
converges to the fixed point x∗ for all x0 ∈ I
Note Both conditions are absolutely essential!
Fixed Point Theorem. Existence and uniqueness
[Figure: three plots of a function g on [0, 1], illustrating the three cases below]
Left No condition satisfied: maybe no x∗
Center First condition satisfied: maybe multiple x∗
Right Both conditions satisfied: unique x∗
Fixed point iteration. x = e^(−x); g(x) = e^(−x)
x[k+1] = e^(−x[k]); x[0] = 0
g : [0, 1] → [0, 1]; |g′(x)| = e^(−x) < 1 for x > 0
[Figure: the fixed point iteration for x = e^(−x) on [0, 1]]
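As an illustration (a Python sketch, not part of the original slides), the iteration above can be coded in a few lines; the function and variable names are made up for the example.

```python
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
    """Iterate x[k+1] = g(x[k]) until two successive iterates agree to tol."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# x = e^(-x) with x[0] = 0; convergence is linear since |g'(x*)| is about 0.57
root, iters = fixed_point_iteration(lambda x: math.exp(-x), 0.0)
print(root, iters)   # root ~ 0.5671432904
```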
Error estimation in fixed point iteration
Assume ‖g′(x)‖ ≤ L < 1 for all x ∈ I.
x[k+1] = g(x[k])
x∗ = g(x∗)
By the mean value theorem,
x[k+1] − x∗ = g(x[k]) − g(x∗)
= g(x[k]) − g(x[k+1]) + g(x[k+1]) − g(x∗)
‖x[k+1] − x∗‖ ≤ L · ‖x[k] − x[k+1]‖ + L · ‖x[k+1] − x∗‖
Error estimation . . .
(1 − L) · ‖x[k+1] − x∗‖ ≤ L · ‖x[k] − x[k+1]‖
Error estimate (computable error bound)
‖x[k+1] − x∗‖ ≤ L/(1 − L) · ‖x[k] − x[k+1]‖
Theorem If L[g] < 1, then the error in fixed point iteration is bounded by
‖x[k+1] − x∗‖ ≤ L[g]/(1 − L[g]) · ‖x[k] − x[k+1]‖
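A hedged sketch (Python, illustrative names) of how this computable bound can serve as a stopping criterion; the Lipschitz estimate L must be supplied by the caller, e.g. L = e^(−1/2) is a valid bound for g(x) = e^(−x) on [0.5, 0.7].

```python
import math

def fixed_point_solve(g, x0, L, tol=1e-10, max_iter=200):
    """Fixed point iteration stopped when the computable bound
    L/(1 - L) * |x[k] - x[k+1]| falls below tol (requires L < 1)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if L / (1.0 - L) * abs(x - x_new) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# For g(x) = e^(-x) on [0.5, 0.7] one may take L = e^(-0.5) ~ 0.61
print(fixed_point_solve(lambda x: math.exp(-x), 0.5, L=math.exp(-0.5)))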
Example: The trapezoidal rule
Approximate the solution to y′ = −y^2 cos t, y(0) = 1/2, t ∈ [0, 8π].
Taking 96 steps, solve nonlinear equations with one (left) and
four (right) fixed point iterations. Graphs show absolute error
[Figure: absolute error over t ∈ [0, 8π]; left: one fixed point iteration per step (errors up to about 0.7), right: four iterations per step (errors below 0.01)]
3. Newton’s method
Newton’s method solves f(x) = 0 using repeated linearizations. Linearize at the point (x[k], f(x[k]))!
[Figure: geometric illustration of Newton's method, linearizing f at the current iterate]
Newton’s method . . .
Straight line equation
y − f(x[k]) = f′(x[k]) · (x − x[k])
Setting y = 0 at x = x[k+1] gives
−f(x[k]) = f′(x[k]) · (x[k+1] − x[k])
Solve for x[k+1] to get Newton’s method
x[k+1] = x[k] − f(x[k]) / f′(x[k])
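A small Python sketch of the scalar Newton iteration (illustrative names, not from the slides), applied to the equation x − cos(x) = 0 from the introduction.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x[k+1] = x[k] - f(x[k]) / f'(x[k])."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

print(newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 1.0))  # ~0.7390851
```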
Newton’s method. Alternative derivation
Expand f(x[k+1]) in a Taylor series around x[k]
f(x[k+1]) = f(x[k] + (x[k+1] − x[k]))
≈ f(x[k]) + f ′(x[k]) · (x[k+1] − x[k]) := 0
Newton’s method
x[k+1] = x[k] − f(x[k]) / f′(x[k])
Newton’s method. Alternative derivation . . .
This holds also when f is vector valued (equation systems)
f(x[k]) + f ′(x[k]) · (x[k+1] − x[k]) := 0
⇒   x[k+1] = x[k] − (f′(x[k]))^(−1) f(x[k])
Definition f′(x[k]) is the Jacobian matrix of f, defined by
f′(x) = {∂fi/∂xj}
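A sketch of Newton's method for systems using the Jacobian (assumes NumPy; names are illustrative); as a test case it uses the nonlinear system from the opening slide.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton for systems: solve J(x) dx = -F(x), then update x += dx."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# The system 4x^2 - y^2 = 0, 4xy^2 - x = 1 from the introduction
F = lambda v: np.array([4*v[0]**2 - v[1]**2, 4*v[0]*v[1]**2 - v[0] - 1])
J = lambda v: np.array([[8*v[0], -2*v[1]], [4*v[1]**2 - 1, 8*v[0]*v[1]]])
print(newton_system(F, J, [0.5, 1.0]))
```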
Newton’s method. Convergence
Write Newton’s method as a fixed point iteration with iteration function g(x) := x − f(x)/f′(x)
x[k+1] = g(x[k])
Note Newton’s method converges fast if f′(x∗) ≠ 0, as g′(x∗) = f(x∗)f′′(x∗)/f′(x∗)^2 = 0
Expand g(x) in a Taylor series around x∗
g(x[k]) − g(x∗) ≈ g′(x∗)(x[k] − x∗) + (g′′(x∗)/2)(x[k] − x∗)^2
x[k+1] − x∗ ≈ (g′′(x∗)/2)(x[k] − x∗)^2
Newton’s method. Convergence . . .
Define the error by ε[k] = x[k] − x∗, then
ε[k+1] ∼ (ε[k])^2
Newton’s method is quadratically convergent!
Fixed point iterations are typically only linearly convergent
ε[k+1] ∼ ε[k]
Implicit Euler. Newton vs Fixed point
As yn+1 = yn + hf(yn+1) we need to solve an equation
y = hf(y) + ψ
Note All implicit methods lead to an equation of this form!
Theorem Fixed point iterations converge if L[hf] < 1, restricting the step size to h < 1/L[f]!
Stiff equations have L[hf] ≫ 1, so fixed point iterations will not converge; it is necessary to use Newton’s method!
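A sketch (assumes NumPy; names illustrative) of one implicit Euler step where y = yn + h f(y) is solved by Newton's method; the test problem is deliberately stiff, so L[hf] = 10 and fixed point iteration would diverge.

```python
import numpy as np

def implicit_euler_step(f, dfdy, y_n, h, tol=1e-12, max_iter=20):
    """Solve y = y_n + h*f(y) by Newton: residual r(y) = y - h*f(y) - y_n,
    Jacobian of the residual is I - h*f'(y)."""
    y = np.array(y_n, dtype=float)            # start the iteration from y_n
    I = np.eye(y.size)
    for _ in range(max_iter):
        r = y - h * f(y) - y_n
        dy = np.linalg.solve(I - h * dfdy(y), -r)
        y += dy
        if np.linalg.norm(dy) < tol:
            return y
    raise RuntimeError("Newton iteration did not converge")

# Stiff test problem y' = -100*(y - 1): here L[hf] = 100*h = 10 >> 1
f = lambda y: -100.0 * (y - 1.0)
dfdy = lambda y: np.array([[-100.0]])
print(implicit_euler_step(f, dfdy, np.array([0.0]), 0.1))   # [0.90909...]
```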
Convergence order and rate
Definition The convergence order is p, with (asymptotic) error constant Cp, if
0 < lim_{k→∞} ‖ε[k+1]‖ / ‖ε[k]‖^p = Cp < ∞
Special cases:
p = 1 Linear convergence. Example Fixed point iteration, Cp = |g′(x∗)|
p = 2 Quadratic convergence. Example Newton iteration, Cp = |f′′(x∗) / (2f′(x∗))|
4. Quadrature (integration) formulas
Numerical “quadrature” is the approximation of definite integrals
I(f) = ∫_a^b f(x) dx
We replace the “infinite sum” by a finite sum: the integrand is
sampled at a finite number of points
I(f) = ∑_{i=1}^{n} wi f(xi) + Rn
Rn = I(f) − ∑_{i=1}^{n} wi f(xi) is the quadrature error. The numerical method is
I(f) ≈ ∑_{i=1}^{n} wi f(xi)
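For illustration (not from the slides): a quadrature formula is just a weighted sum of samples. The sketch below evaluates Simpson's rule on [0, 1] in exactly that form; NumPy is used only for the test integrand.

```python
import numpy as np

def quadrature(f, nodes, weights):
    """Approximate I(f) by the finite sum  sum_i w_i * f(x_i)."""
    return sum(w * f(x) for x, w in zip(nodes, weights))

# Simpson's rule on [0, 1]: nodes 0, 1/2, 1 and weights 1/6, 4/6, 1/6
print(quadrature(np.exp, [0.0, 0.5, 1.0], [1/6, 4/6, 1/6]))  # ~1.7189, exact e - 1 ~ 1.7183
```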
5. Runge–Kutta methods
To solve the IVP y′ = f(t, y), t ≥ t0, y(t0) = y0 we use a
quadrature formula to approximate the integral in
y(tn+1) = y(tn) + ∫_{tn}^{tn+1} f(τ, y(τ)) dτ
y(tn+1) ≈ y(tn) + h ∑_{j=1}^{s} bj f(tn + cjh, y(tn + cjh))
Let {Yj} denote the numerical approximations to {y(tn + cjh)}. A Runge–Kutta method then has the form
yn+1 = yn + h ∑_{j=1}^{s} bj f(tn + cjh, Yj)
Stage values and stage derivatives
The vectors Yj are called stage values
The vectors Y′j = f(tn + cjh, Yj) are called stage derivatives
Stage values and stage derivatives are related through
Yi = yn + h ∑_{j=1}^{i−1} ai,j Y′j
and the method advances one step through
yn+1 = yn + h ∑_{j=1}^{s} bj Y′j
The RK matrix, weights, nodes and stages
A = {aij} is the RK matrix, b = [b1 b2 · · · bs]^T is the weight vector,
c = [c1 c2 · · · cs]^T are the nodes, and the method has s stages
The Butcher tableau of an explicit RK method is
 0  |  0    0   · · ·  0
 c2 | a21   0   · · ·  0
 :  |  :    :          :
 cs | as1  as2  · · ·  0
----+------------------
    |  b1   b2  · · ·  bs
or, compactly,
 c | A
---+----
   | b^T
Simplifying assumption: ci = ∑_{j=1}^{i} ai,j,  i = 2, 3, . . . , s
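A generic sketch (assumes NumPy; the helper name erk_step is illustrative, not from the slides) of one explicit Runge–Kutta step driven directly by a Butcher tableau (A, b, c); the test tableau is the modified Euler method from Example 1 below.

```python
import numpy as np

def erk_step(f, t_n, y_n, h, A, b, c):
    """One explicit RK step: stage derivatives K[i] = f(t_n + c_i*h, Y_i), with
    stage values Y_i = y_n + h * sum_{j<i} a_ij * K[j], then y_{n+1} = y_n + h*(b.K)."""
    s = len(b)
    K = np.zeros((s, np.size(y_n)))
    for i in range(s):
        Y_i = y_n + h * (A[i, :i] @ K[:i])     # only previous stages enter (A strictly lower)
        K[i] = f(t_n + c[i] * h, Y_i)
    return y_n + h * (b @ K)

# Modified Euler tableau (Example 1 below): c = [0, 1/2], a21 = 1/2, b = [0, 1]
A = np.array([[0.0, 0.0], [0.5, 0.0]])
b = np.array([0.0, 1.0]); c = np.array([0.0, 0.5])
f = lambda t, y: -y**2 * np.cos(t)             # the test problem used earlier
y, t, h = np.array([0.5]), 0.0, 0.1
for _ in range(10):
    y = erk_step(f, t, y, h, A, b, c); t += h
print(t, y)                                    # exact solution is 1/(2 + sin t)
```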
Simplest case: a 2-stage ERK
Y′1 = f(tn, yn)
Y′2 = f(tn + c2h, yn + h a21 Y′1)
yn+1 = yn + h [b1 Y′1 + b2 Y′2]
 0  |  0    0
 c2 | a21   0
----+---------
    |  b1   b2
with c2 = a21
2-stage ERK . . .
Using Y′1 = f(tn, yn), expand in a Taylor series around (tn, yn):
Y′2 = f(tn + c2h, yn + h a21 f(tn, yn)) = f + h [c2 ft + a21 fy f] + O(h^2)
Inserting into yn+1 = yn + h [b1 Y′1 + b2 Y′2], we get
yn+1 = yn + h (b1 + b2) f + h^2 b2 (c2 ft + a21 fy f) + O(h^3)
Taylor expansion of the exact solution:
y′ = f
y′′ = ft + fy y′ = ft + fy f
y(t + h) = y + h f + (1/2) h^2 (ft + fy f) + O(h^3)
2-stage ERK . . .
Matching terms in
yn+1 = yn + h (b1 + b2) f + h^2 b2 (c2 ft + a21 fy f) + O(h^3)
y(t + h) = y + h f + (1/2) h^2 (ft + fy f) + O(h^3)
and taking c2 = a21 we get the conditions for order 2:
b1 + b2 = 1 (consistency)
b2c2 = 1/2
Note All consistent RK methods are convergent!
Second order, 2-stage ERK methods have a Butcher tableau
 0       |  0       0
 1/(2b)  | 1/(2b)   0
---------+------------
         | 1 − b    b
Example 1. The modified Euler method
Put b = 1 to get
 0   |  0    0
 1/2 | 1/2   0
-----+--------
     |  0    1
Y′1 = f(tn, yn)
Y′2 = f(tn + h/2, yn + h Y′1/2)
yn+1 = yn + h Y′2
Second order explicit Runge–Kutta (ERK) method
Example 2. Heun’s method
Put b = 1/2 to get
 0 |  0    0
 1 |  1    0
---+---------
   | 1/2  1/2
Y′1 = f(tn, yn)
Y′2 = f(tn + h, yn + h Y′1)
yn+1 = yn + h (Y′1 + Y′2)/2
Second order ERK, compare to the trapezoidal rule!
Third order 3-stage ERK
Conditions for 3rd order: a21 = c2; a31 + a32 = c3
b1 + b2 + b3 = 1
b2 c2 + b3 c3 = 1/2
b2 c2^2 + b3 c3^2 = 1/3
b3 a32 c2 = 1/6
Classical RK:
 0   |  0    0    0
 1/2 | 1/2   0    0
 1   | −1    2    0
-----+---------------
     | 1/6  2/3  1/6
Nyström scheme:
 0   |  0    0    0
 2/3 | 2/3   0    0
 2/3 |  0   2/3   0
-----+---------------
     | 1/4  3/8  3/8
Exercise
Construct the Butcher tableau for the 3-stage Heun method.
Y′1 = f(tn, yn)
Y′2 = f(tn + h/3, yn + h Y′1/3)
Y′3 = f(tn + 2h/3, yn + 2h Y′2/3)
yn+1 = yn + h (Y′1 + 3Y′3)/4
Is it of order 3?
Classic RK4: 4th order, 4-stage ERK
The “original” RK method (1895):
Y′1 = f(tn, yn)
Y′2 = f(tn + h/2, yn + h Y′1/2)
Y′3 = f(tn + h/2, yn + h Y′2/2)
Y′4 = f(tn + h, yn + h Y′3)
yn+1 = yn + (h/6)(Y′1 + 2Y′2 + 2Y′3 + Y′4)
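As a hedged illustration, the RK4 tableau (next slide) can be fed to the erk_step sketch given earlier; here it reproduces the 96-step setting of the trapezoidal rule example, with exact solution 1/(2 + sin t) used to measure the error.

```python
import numpy as np

# Classic RK4 tableau (see the Butcher tableau on the next slide)
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]], dtype=float)
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

# y' = -y^2*cos(t), y(0) = 1/2 on [0, 8*pi] with 96 steps
f = lambda t, y: -y**2 * np.cos(t)
t, y, h = 0.0, np.array([0.5]), 8 * np.pi / 96
for _ in range(96):
    y = erk_step(f, t, y, h, A, b, c); t += h
print(abs(y[0] - 1 / (2 + np.sin(t))))   # error at the end point
```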
Classic 4th order RK4 . . .
Butcher tableau
 0   |
 1/2 | 1/2
 1/2 |  0   1/2
 1   |  0    0    1
-----+------------------
     | 1/6  1/3  1/3  1/6
s-stage ERK methods of order p = s exist only for s ≤ 4 (i.e.
there is no 5-stage ERK of order 5)
Order conditions
An s-stage ERK method has s + s(s − 1)/2 coefficients to choose, but the order conditions are many
Number of coefficients
stages s        1  2  3  4   5   6   7   8   9   10
# coefficients  1  3  6  10  15  21  28  36  45  55
Number of order conditions
order p       1  2  3  4  5   6   7   8    9    10
# conditions  1  2  4  8  17  37  85  200  486  1205
Maximum order, min stages
order p     1  2  3  4  5  6  7  8
min stages  1  2  3  4  6  7  9  10
6. Embedded RK methods
Two methods in one Butcher tableau (RK34)
Y′1 = f(tn, yn)
Y′2 = f(tn + h/2, yn + h Y′1/2)
Y′3 = f(tn + h/2, yn + h Y′2/2)
Z′3 = f(tn + h, yn − h Y′1 + 2h Y′2)
Y′4 = f(tn + h, yn + h Y′3)
yn+1 = yn + (h/6)(Y′1 + 2Y′2 + 2Y′3 + Y′4)   order 4
zn+1 = yn + (h/6)(Y′1 + 4Y′2 + Z′3)   order 3
The difference yn+1 − zn+1 can be used as an error estimate
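A direct transcription of the pair into Python (an illustrative sketch, assuming NumPy); it returns both results and the error estimate ‖yn+1 − zn+1‖.

```python
import numpy as np

def rk34_step(f, t, y, h):
    """One step of the embedded RK34 pair: 4th order y_new, 3rd order z_new,
    and the local error estimate ||y_new - z_new||."""
    K1 = f(t, y)
    K2 = f(t + h/2, y + h*K1/2)
    K3 = f(t + h/2, y + h*K2/2)
    Z3 = f(t + h,   y - h*K1 + 2*h*K2)
    K4 = f(t + h,   y + h*K3)
    y_new = y + h/6 * (K1 + 2*K2 + 2*K3 + K4)   # order 4
    z_new = y + h/6 * (K1 + 4*K2 + Z3)          # order 3
    return y_new, z_new, np.linalg.norm(y_new - z_new)
```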
Adaptive RK methods
Example Use an embedded pair, e.g. RK34
Local error estimate rn+1 := ‖yn+1 − zn+1‖ = O(h^4)
Adjust the step size h so that the local error estimate equals a prescribed tolerance TOL
Simplest step size change scheme
hn+1 = (TOL/rn+1)^(1/p) · hn
makes rn ≈ TOL
Adaptivity using local error control
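A sketch of this simplest controller, built on the rk34_step sketch above (p = 4); it accepts every step and only adjusts h, which shows the idea but omits step rejection.

```python
def adaptive_rk34(f, t0, y0, t_end, h0, tol, p=4):
    """March from t0 to t_end, choosing h_{n+1} = (TOL/r_{n+1})^(1/p) * h_n."""
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)                       # do not step past t_end
        y_new, _, r = rk34_step(f, t, y, h)
        t, y = t + h, y_new                         # accept the step (no rejection here)
        h = (tol / max(r, 1e-16)) ** (1.0 / p) * h  # adjust h for the next step
    return t, y
```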
7. Implicit Runge–Kutta methods (IRK)
Y1 = yn + h ∑_{j=1}^{s} a1,j f(tn + cjh, Yj)
Y2 = yn + h ∑_{j=1}^{s} a2,j f(tn + cjh, Yj)
...
Ys = yn + h ∑_{j=1}^{s} as,j f(tn + cjh, Yj)
yn+1 = yn + h ∑_{j=1}^{s} bj f(tn + cjh, Yj)
Implicit Runge–Kutta methods . . .
In stage value – stage derivative form
Yi = yn + h ∑_{j=1}^{s} ai,j Y′j
Y′j = f(tn + cjh, Yj)
yn+1 = yn + h ∑_{j=1}^{s} bj Y′j
1-stage IRK method
Implicit Euler (order 1)          Implicit midpoint method (order 2)
 1 | 1                             1/2 | 1/2
---+---                            ----+----
   | 1                                 | 1
Y′1 = f(tn + c1h, yn + h a11 Y′1)
yn+1 = yn + h b1 Y′1
may be written as
yn+1 = yn + h b1 f(tn + c1h, ((b1 − a11)/b1) yn + (a11/b1) yn+1)
Taylor expansion:
yn+1 = y + h b1 f(tn + c1h, y + (a11/b1) h f + (a11/b1)(h^2/2)(ft + fy f) + O(h^3))
     = y + h b1 f + h^2 (b1 c1 ft + a11 fy f) + O(h^3)
Taylor expansions for 1-stage IRK
y(tn+1) = y + h f + (h^2/2)(ft + fy f) + O(h^3)
yn+1 = y + h b1 f + h^2 (b1 c1 ft + a11 fy f) + O(h^3)
Condition for order 1 (consistency) b1 = 1
Conditions for order 2 c1 = a11 = 1/2
Conclusion Implicit Euler is of order 1, and the implicit midpoint method is the only 1-stage IRK of order 2
8. The stability function
Applying an IRK to the test equation, we get
hY′i = hλ · (yn + ∑_{j=1}^{s} ai,j hY′j),   i = 1, . . . , s
Let hY′ = [hY′1 · · · hY′s]^T and 1 = [1 1 · · · 1]^T ∈ R^s, then
(I − hλA) hY′ = hλ 1 yn   so   hY′ = hλ (I − hλA)^(−1) 1 yn
yn+1 = yn + ∑_{j=1}^{s} bj hY′j = [1 + hλ b^T (I − hλA)^(−1) 1] yn
The stability function
Theorem For every Runge–Kutta method applied to the linear test equation y′ = λy we have
yn+1 = R(hλ) yn
where the rational function
R(z) = 1 + z b^T (I − zA)^(−1) 1
If the method is explicit, then R(z) is a polynomial of degree s
The function R(z) is called the method’s stability function
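The stability function can be evaluated numerically straight from the tableau; a small sketch (assumes NumPy, names illustrative) with two cases whose stability functions are known in closed form.

```python
import numpy as np

def stability_function(A, b, z):
    """R(z) = 1 + z * b^T (I - z*A)^{-1} * 1 for a Runge-Kutta tableau (A, b)."""
    s = len(b)
    return 1 + z * (b @ np.linalg.solve(np.eye(s) - z * A, np.ones(s)))

# Explicit Euler: A = [0], b = [1]  ->  R(z) = 1 + z
print(stability_function(np.array([[0.0]]), np.array([1.0]), -2.0))   # -1.0
# Implicit Euler: A = [1], b = [1]  ->  R(z) = 1/(1 - z)
print(stability_function(np.array([[1.0]]), np.array([1.0]), -2.0))   # 0.333...
```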
A-stability of RK methods
Definition The method’s stability region is the set
D = {z ∈ C : |R(z)| ≤ 1}
Theorem If R(z) maps all of C− into the unit circle, then
the method is A-stable
Corollary No explicit RK method is A-stable
(For an ERK, R(z) is a polynomial, and |R(z)| → ∞ as |z| → ∞)
A-stable methods and the Maximum Principle
Theorem |R(z)| ≤ 1 for all z ∈ C− if and only if all the poles of
R have positive real parts and |R(iω)| ≤ 1 for all ω ∈ R
This is the Maximum Principle in complex analysis
Example
 0   | 1/4  −1/4
 2/3 | 1/4   5/12
-----+-----------
     | 1/4   3/4
Y′1 = f(yn + h Y′1/4 − h Y′2/4)
Y′2 = f(yn + h Y′1/4 + 5h Y′2/12)
yn+1 = yn + h (Y′1 + 3Y′2)/4
Example . . .
Applied to the test equation, we get
yn+1 = (1 + (1/3)hλ) / (1 − (2/3)hλ + (1/6)(hλ)^2) · yn
with poles 2 ± i√2 ∈ C+, and
|R(iω)|^2 = (1 + (1/9)ω^2) / (1 + (1/9)ω^2 + (1/36)ω^4) ≤ 1
Conclusion |R(z)| ≤ 1 ∀ z ∈ C−. The method is A-stable
9. Linear Multistep Methods
A multistep method is a method of the type
yn+1 = Φ(f, h, y0, y1, . . . , yn)
using values from several previous steps
◮ Explicit Euler yn+1 = yn + h f(tn, yn)
◮ Trapezoidal rule yn+1 = yn + h (f(tn, yn) + f(tn+1, yn+1))/2
◮ Implicit Euler yn+1 = yn + h f(tn+1, yn+1)
are all one-step methods, but also LM methods
Multistep methods and difference equations
A k-step multistep method replaces the ODE y′ = f(t, y) by a
(finite) difference equation
∑_{j=0}^{k} ak−j yn−j = h ∑_{j=0}^{k} bk−j f(tn−j, yn−j)
Generating (characteristic) polynomials
ρ(w) = ∑_{m=0}^{k} am w^m      σ(w) = ∑_{m=0}^{k} bm w^m
Normalization ak = 1 or σ(1) = 1 (choose your preference)
If bk = 0, the method is explicit; if bk ≠ 0, it is implicit
EE: ρ(w) = w − 1, σ(w) = 1; IE: ρ(w) = w − 1, σ(w) = w
Lagrange interpolation
Given a grid {t0, t1, . . . , tk} construct a degree k polynomial
basis
Φi(t) i = 0, 1, . . . , k
such that Φi(tj) = δij (the Kronecker delta)
If the values zj = z(tj) are known for the function z(t), then
P(t) = ∑_{j=0}^{k} Φj(t) zj
interpolates z(t) on the grid:
P(tj) = zj;  P(t) ≈ z(t) for all t
Adams methods (J.C. Adams, 1880s)
Suppose we have the first n+ k approximations
ym = y(tm), m = 0, 1, . . . , n+ k − 1
Rewrite y′ = f(t, y) by integration
y(tn+k) − y(tn+k−1) = ∫_{tn+k−1}^{tn+k} f(τ, y(τ)) dτ
Adams methods approximate the integrand by an interpolation polynomial on tn, tn−1, . . .
f(τ, y(τ)) ≈ P (τ)
Adams–Bashforth methods (explicit)
Interpolate f values with polynomial of degree k − 1:
P(tn+j) = f(tn+j, y(tn+j)),   j = 0, . . . , k − 1
Then P(τ) = f(τ, y(τ)) + O(h^k) for t ∈ [tn, tn+k]
y(tn+k) = y(tn+k−1) + ∫_{tn+k−1}^{tn+k} P(τ) dτ + O(h^(k+1))
The k-step Adams–Bashforth method is the order k method
yn+k = yn+k−1 + h ∑_{j=0}^{k−1} bj f(tn+j, yn+j)
where bj = h^(−1) ∫_{tn+k−1}^{tn+k} Φj(τ) dτ
Coefficients of AB1
AB1: for k = 1
yn+1 = yn + h b0 f(tn, yn)
where
b0 = h^(−1) ∫_{tn}^{tn+1} Φ0(τ) dτ = h^(−1) ∫_{tn}^{tn+1} 1 dτ = 1   ⇒
yn+1 = yn + h f(tn, yn)
Conclusion AB1 is the explicit Euler method
Coefficients of AB2
Here
Φ1(t) = (t − tn)/(tn+1 − tn);   Φ0(t) = (t − tn+1)/(tn − tn+1)
AB2: for k = 2
yn+2 = yn+1 + h [b1 f(tn+1, yn+1) + b0 f(tn, yn)]
b0 = h^(−1) ∫_{tn+1}^{tn+2} Φ0(τ) dτ = −1/2
b1 = h^(−1) ∫_{tn+1}^{tn+2} Φ1(τ) dτ = 3/2
yn+2 = yn+1 + h [ (3/2) f(tn+1, yn+1) − (1/2) f(tn, yn) ]
Initializing an Adams method
The first step of AB2 is
y2 = y1 + h [ (3/2) f(t1, y1) − (1/2) f(t0, y0) ]
so we need the values of y0 and y1 to start.
y0 is obtained from the initial value, but y1 has to be computed
with a one-step method.
Implementing AB2: we may use AB1 for the first step,
y1 = y0 + h f(t0, y0)
yn+2 = yn+1 + h [ (3/2) f(tn+1, yn+1) − (1/2) f(tn, yn) ],   n ≥ 0
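A sketch of AB2 started with one explicit Euler (AB1) step, as described above (assumes NumPy; names are illustrative); the test problem is the one from the example on the next slide.

```python
import numpy as np

def ab2_solve(f, t0, y0, h, n_steps):
    """AB2 with an AB1 (explicit Euler) step to supply the missing starting value y1."""
    t = t0 + h * np.arange(n_steps + 1)
    y = [np.asarray(y0, dtype=float)]
    y.append(y[0] + h * f(t[0], y[0]))                       # AB1: y1
    for n in range(n_steps - 1):                             # AB2: y_{n+2}
        y.append(y[n+1] + h * (1.5*f(t[n+1], y[n+1]) - 0.5*f(t[n], y[n])))
    return t, np.array(y)

# y' = -y^2*cos(t), y(0) = 1/2, h = pi/60, 480 steps up to t = 8*pi
t, y = ab2_solve(lambda t, y: -y**2 * np.cos(t), 0.0, np.array([0.5]), np.pi/60, 480)
print(np.max(np.abs(y[:, 0] - 1/(2 + np.sin(t)))))           # max error vs 1/(2 + sin t)
```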
Example: AB1 vs AB2
Approximate the solution to y′ = −y^2 cos t, y(0) = 1/2, t ∈ [0, 8π]
with h = π/6 and h = π/60
AB1: Solutions / Errors
[Figure: numerical solutions and absolute errors (logarithmic error scale) for AB1 and AB2 with the two step sizes]
AB2: Solutions / Errors
How do we check the order of a multistep method?
A multistep method is of order of consistency p if the local error is
∑_{j=0}^{k} aj y(tn+j) − h ∑_{j=0}^{k} bj y′(tn+j) = O(h^(p+1))
Check whether the formula holds exactly for polynomials
y(t) = 1, t, t^2, t^3, . . .
Insert y = t^m and y′ = m t^(m−1) into the formula, take tn+j = jh:
∑_{j=0}^{k} aj (jh)^m − h ∑_{j=0}^{k} bj m (jh)^(m−1) = h^m ∑_{j=0}^{k} (aj j^m − bj m j^(m−1))
Order conditions for multistep methods
Theorem A k-step method is of consistency order p if and only
if it satisfies the following conditions:
◮ ∑_{j=0}^{k} j^m aj = m ∑_{j=0}^{k} j^(m−1) bj,   m = 0, 1, . . . , p
◮ ∑_{j=0}^{k} j^(p+1) aj ≠ (p + 1) ∑_{j=0}^{k} j^p bj
Then the multistep formula holds exactly for all polynomials of
degree p or less. (Problems with y = P (t) are solved exactly)
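These conditions can be checked mechanically; a small sketch (illustrative names, not from the slides) returns the consistency order of a k-step formula given its coefficients aj and bj, tested on AB2.

```python
def consistency_order(a, b, p_max=10):
    """Largest p with sum_j j^m*a_j = m*sum_j j^(m-1)*b_j for all m = 0, ..., p."""
    k = len(a) - 1
    for m in range(p_max + 1):
        lhs = sum(j**m * a[j] for j in range(k + 1))
        rhs = m * sum(j**(m - 1) * b[j] for j in range(k + 1)) if m > 0 else 0.0
        if abs(lhs - rhs) > 1e-12:
            return m - 1
    return p_max

# AB2: y_{n+2} - y_{n+1} = h*(3/2*f_{n+1} - 1/2*f_n), i.e. a = [0,-1,1], b = [-1/2,3/2,0]
print(consistency_order([0.0, -1.0, 1.0], [-0.5, 1.5, 0.0]))   # 2
```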
The root condition
Definition A polynomial ρ satisfies the root condition if all of its
zeros have moduli less than or equal to one and the zeros of unit
modulus are simple
Examples
◮ ρ(w) = (w − 1)(w − 0.5)(w + 0.9)
◮ ρ(w) = (w − 1)(w + 1)
◮ ρ(w) = (w − 1)2(w − 0.5)
◮ ρ(w) = (w − 1)(w − √2)(w + √2)
◮ ρ(w) = (w − 1)(w2 + 0.25)
All Adams methods have ρ(w) = w^(k−1)(w − 1)
The Dahlquist equivalence theorem
Definition A method is zero-stable if ρ satisfies the root condition
Theorem A multistep method is convergent if and only if it is zero-stable and consistent of order p ≥ 1 (without proof)
Example k-step Adams–Bashforth methods are of order k and have ρ(w) = w^(k−1)(w − 1) ⇒ they are convergent
Dahlquist’s first barrier
Theorem The maximal order of a zero-stable k-step method is
p = k      for explicit methods
p = k + 1  for implicit methods, if k is odd
p = k + 2  for implicit methods, if k is even
Backward differentiation formula of order 2
Construct a 2-step method of order 2 of the form
α2yn+2 + α1yn+1 + α0yn = h f(tn+2, yn+2)
Order conditions for p = 2:
α2 + α1 + α0 = 0; 2α2 + α1 = 1; 4α2 + α1 = 4
⇒ α2 = 3/2; α1 = −2; α0 = 1/2;
ρ(w) = (3/2)(w − 1)(w − 1/3) ⇒ BDF2 is convergent
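The three order conditions form a small linear system; as a check (assuming NumPy), solving it reproduces the BDF2 coefficients.

```python
import numpy as np

# Order conditions for m = 0, 1, 2 with b = [0, 0, 1] on the right-hand side:
# sum_j alpha_j = 0,  sum_j j*alpha_j = 1,  sum_j j^2*alpha_j = 4
M = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 1.0, 4.0]])
rhs = np.array([0.0, 1.0, 4.0])
print(np.linalg.solve(M, rhs))   # [alpha_0, alpha_1, alpha_2] = [0.5, -2.0, 1.5]
```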
Backward differentiation formulas
The backward difference operator is defined as ∇^0 yn+k = yn+k and ∇^j yn+k = ∇^(j−1) yn+k − ∇^(j−1) yn+k−1, j ≥ 1
Theorem (without proof): The k-step BDF method
∑_{j=1}^{k} (1/j) ∇^j yn+k = h f(tn+k, yn+k)
is convergent of order p = k if and only if 1 ≤ k ≤ 6
Note BDF methods are suitable for stiff problems
BDF1-6 stability regions
The methods are stable outside the indicated area
[Figure: stability regions of the BDF1–6 methods in the complex hλ-plane]
A-stability of multistep methods
Applying a multistep method to the linear test equation y′ = λy produces a difference equation
∑_{j=0}^{k} aj yn+j = hλ ∑_{j=0}^{k} bj yn+j
The characteristic equation (with z := hλ)
ρ(w) − zσ(w) = 0
has k roots wj(z). The method is A-stable iff
Re z ≤ 0 ⇒ |wj(z)| ≤ 1,
with simple unit modulus roots (root condition)
Dahlquist’s second barrier
Theorem (without proof): The highest order of an A-stable multistep method is p = 2. Of all 2nd order A-stable multistep methods, the trapezoidal rule has the smallest error
Note There is no such order restriction for Runge–Kutta methods, which can be A-stable for arbitrarily high orders
A multistep method can be useful although it isn’t A-stable