Open Methods
Numerical methods
Direct methods: solve a numerical problem by a finite sequence of operations; in the absence of round-off errors they deliver an exact solution, e.g., solving a linear system Ax = b by Gaussian elimination.
Iterative methods: solve a numerical problem (e.g., finding the root of a system of equations) by computing successive approximations to the solution from an initial guess.
Stopping criterion: the approximate relative error is smaller than a pre-specified value
εa = |(xi+1 - xi) / xi+1| × 100%
Numerical methods
Iterative methods: when to use them?
• The only alternative for nonlinear systems of equations
• Often useful even for linear problems involving a large number of variables, where direct methods would be prohibitively expensive or impossible
Convergence of a numerical method: successive approximations lead to increasingly smaller relative errors. The opposite is divergence.
Iterative Methods for Finding Roots
Bracketing methods vs open methods
Open methods require only a single starting value, or two starting values that do not necessarily bracket a root.
They may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods.
Bracketing vs Open; Convergence vs Divergence
(Figure: (a) a bracketing method starts with an interval; an open method starts with a single initial guess. (b) A diverging open method. (c) A converging open method; note the speed!)
Simple Fixed-Point Iteration
Rewrite the equation f(x) = 0 so that x is on the left-hand side: x = g(x)
Use the iteration xi+1 = g(xi) to find a value that reaches convergence: the new function g predicts a new value of x. Compute the approximate error at each step.
Graphically, the root is at the intersection of the two curves y1(x) = g(x) and y2(x) = x.
εa = |(xi+1 - xi) / xi+1| × 100%

Examples of rearranging f(x) = 0 into x = g(x):
2 sin(√x) - x = 0  →  x = 2 sin(√x)
x^2 - 2x - 3 = 0  →  x = (x^2 - 3)/2
Example
Solve f(x) = e^-x - x = 0. Rewrite as x = g(x): x = e^-x
Start with an initial guess of x0 = 0
Continue until some tolerance is reached
i    xi       |εa| %    |εt| %    |εt|i/|εt|i-1
0    0.0000             100.000
1    1.0000   100.000    76.322   0.763
2    0.3679   171.828    35.135   0.460
3    0.6922    46.854    22.050   0.628
4    0.5005    38.309    11.755   0.533
More on Convergence
The solution is at the intersection of the two curves. Identify the point on y2 corresponding to the initial guess; the next guess corresponds to the value of the argument x where y1(x) = y2(x).
Convergence requires that the derivative of g(x) near the root has a magnitude < 1:
(a) Convergent, 0 ≤ g' < 1
(b) Convergent, -1 < g' ≤ 0
(c) Divergent, g' > 1
(d) Divergent, g' < -1
Steps of Fixed-Point Iteration
x = g(x), f(x) = x - g(x) = 0
Step 1: Guess x0 and calculate y0 = g(x0)
Step 2: Let x1 = g(x0)
Step 3: Check whether x1 is the solution of f(x) = 0
Step 4: If not, set x0 = x1 and repeat the iteration
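The steps above can be sketched in code. The course uses MATLAB, so this is an illustrative Python translation, applied to the example from the previous slide, f(x) = e^-x - x = 0 rewritten as x = e^-x:

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Simple fixed-point iteration x_{i+1} = g(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        # Approximate relative error |(x_{i+1} - x_i) / x_{i+1}| * 100%
        if x_new != 0 and abs((x_new - x) / x_new) * 100 < tol:
            return x_new
        x = x_new
    return x

# Example from the slides: f(x) = e^-x - x = 0, rewritten as x = e^-x
root = fixed_point(lambda x: math.exp(-x), 0.0)
print(root)  # converges to about 0.5671, as in the table above
```

The first few iterates (0, 1.0000, 0.3679, 0.6922, 0.5005, …) reproduce the xi column of the table.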
Exercise
Use simple fixed-point iteration to locate the root of f(x) = 2 sin(√x) - x
Use an initial guess of x0 = 0.5
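A minimal Python sketch of the exercise (the course's own code is MATLAB), iterating x = 2 sin(√x) from x0 = 0.5:

```python
import math

# Fixed-point iteration x_{i+1} = g(x_i) with g(x) = 2*sin(sqrt(x))
x = 0.5  # initial guess from the exercise
for _ in range(100):
    x_new = 2 * math.sin(math.sqrt(x))
    # Approximate relative error |(x_{i+1} - x_i) / x_{i+1}| * 100%
    if abs((x_new - x) / x_new) * 100 < 1e-8:
        x = x_new
        break
    x = x_new
print(x)  # converges to roughly 1.9724
```

Convergence is guaranteed here because |g'(x)| = |cos(√x)/√x| is well below 1 near the root.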
Newton-Raphson Method
Express the values of the function and its derivative at xi.
Graphically: draw the tangent line to the f(x) curve at some guess x, then follow the tangent line to where it crosses the x-axis.
At the root, f(xi+1) = 0, so the tangent slope gives
f'(xi) = (f(xi) - 0) / (xi - xi+1)
xi+1 = xi - f(xi) / f'(xi)
Newton-Raphson Method: Example
f(x) = x^4 - 3x - 4 = 0
(Figure: false position uses a secant line; Newton's method uses a tangent line at xi, which crosses the x-axis at xi+1; the root is at x*.)
Newton-Raphson Method
Step 1: Start at the point (x1, f(x1))
Step 2: Find the intersection of the tangent to f(x) at this point with the x-axis:
x2 = x1 - f(x1)/f'(x1)
Step 3: Check whether f(x2) = 0 or |x2 - x1| < tolerance
Step 4: If yes, the solution is xr = x2
If not, set x1 ← x2 and repeat the iteration
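The steps above can be sketched as follows (an illustrative Python version; the function name and the test function x^2 - 2 are assumptions, not from the slides):

```python
def newton_raphson(f, df, x1, tol=1e-8, max_iter=50):
    """Newton-Raphson: follow the tangent line to the x-axis."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) / df(x1)  # Step 2: tangent intersection
        if abs(x2 - x1) < tol:    # Step 3: convergence check
            return x2
        x1 = x2                   # Step 4: repeat
    return x1

# Example: f(x) = x^2 - 2 has the root sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # about 1.41421356
```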
Newton's Method
Note that an evaluation of the derivative (slope) is required; you may have to do this numerically.
Open method: convergence depends on the initial guess and is not guaranteed. However, Newton's method can converge very quickly (quadratic convergence).
Bungee Jumper Problem
Use the Newton-Raphson method; we need to evaluate the function and its derivative:
f(m) = sqrt(g·m/cd) · tanh(sqrt(g·cd/m) · t) - v(t)
df/dm = (1/2) · sqrt(g/(m·cd)) · tanh(sqrt(g·cd/m) · t) - (g·t)/(2m) · sech²(sqrt(g·cd/m) · t)
Given cd = 0.25 kg/m, v = 36 m/s, t = 4 s, and g = 9.81 m/s², determine the mass of the bungee jumper. The model comes from the force balance
dv/dt = g - (cd/m)·v²
>> y=inline('sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36','m')
y =
Inline function:
y(m) = sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36
>> dy=inline('1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2','m')
dy =
Inline function:
dy(m) = 1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2
>> format short; root = newtraph(y,dy,140,0.00001)
root =
142.7376
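The same computation can be sketched in Python (an illustrative translation of the MATLAB session above, using the function and derivative from the slide):

```python
import math

# Bungee jumper model: f(m) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v
g, cd, v, t = 9.81, 0.25, 36.0, 4.0

def f(m):
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

def df(m):
    # Analytical derivative df/dm from the slide
    a = math.sqrt(g * cd / m) * t
    return (0.5 * math.sqrt(g / (m * cd)) * math.tanh(a)
            - g * t / (2 * m) * (1 / math.cosh(a)) ** 2)

m = 140.0  # initial guess, as in the MATLAB session
for _ in range(50):
    m_new = m - f(m) / df(m)  # Newton-Raphson step
    if abs(m_new - m) < 1e-5:
        m = m_new
        break
    m = m_new
print(m)  # about 142.7376 kg, matching the MATLAB result
```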
Multiple Roots
A multiple root (double, triple, etc.) occurs where the function is tangent to the x-axis.
Multiple Roots: Problems
• The function does not change sign at roots of even multiplicity (m = 2, 4, 6, …)
• f'(x) goes to zero at the root, so the program needs a zero check for f(x)
• Slower convergence (linear instead of quadratic) of the Newton-Raphson and secant methods at multiple roots
Modified Newton-Raphson Method
When the multiplicity m of the root is known:
xi+1 = xi - m·f(xi) / f'(xi)
Double root: m = 2; triple root: m = 3. Simple, but the multiplicity m must be known. Maintains quadratic convergence.
Multiple Root with Multiplicity m
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x - 1)^2 (x - 3)^3
(Figure: the curve is tangent to the x-axis at the double root x = 1 and has a triple root at x = 3.)
Multiplicity m:
m = 1: single root
m = 2: double root
m = 3: triple root
etc.
The formula can be used for both single and multiple roots (m = 1 recovers the original Newton's method).
» multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 1
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
step  x                  y
 1    1.30000000000000  -0.442170000000004
 2    1.09600000000000  -0.063612622209021
 3    1.04407272727272  -0.014534428477418
 4    1.02126549372889  -0.003503591972482
 5    1.01045853297516  -0.000861391389428
 6    1.00518770530932  -0.000213627276750
 7    1.00258369467652  -0.000053197123947
 8    1.00128933592285  -0.000013273393044
 9    1.00064404356011  -0.000003315132176
10    1.00032186610620  -0.000000828382262
11    1.00016089418619  -0.000000207045531
12    1.00008043738571  -0.000000051755151
13    1.00004021625682  -0.000000012938003
14    1.00002010751461  -0.000000003234405
15    1.00001005358967  -0.000000000808605
16    1.00000502663502  -0.000000000202135
17    1.00000251330500  -0.000000000050527
18    1.00000125681753  -0.000000000012626
19    1.00000062892307  -0.000000000003162
» multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 2
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
step  x                  y
 1    1.30000000000000  -0.442170000000004
 2    0.89199999999999  -0.109259530656779
 3    0.99229251101321  -0.000480758689392
 4    0.99995587111371  -0.000000015579900
 5    0.99999999853944  -0.000000000000007
 6    1.00000060664549  -0.000000000002935
Original Newton's method: m = 1. Modified Newton's method: m = 2.
Double root at x = 1, so m = 2
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0
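The comparison above can be reproduced with a short Python sketch of the modified formula (the function name is an assumption; the polynomial and its derivative come from the slides):

```python
def modified_newton(f, df, x, m, tol=1e-6, max_iter=100):
    """Modified Newton-Raphson: x_{i+1} = x_i - m*f(x_i)/f'(x_i)."""
    for _ in range(max_iter):
        d = df(x)
        if d == 0:  # zero slope: cannot take a step
            return x
        x_new = x - m * f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = (x-1)^2 (x-3)^3 = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27
f  = lambda x: x**5 - 11*x**4 + 46*x**3 - 90*x**2 + 81*x - 27
df = lambda x: 5*x**4 - 44*x**3 + 138*x**2 - 180*x + 81

root = modified_newton(f, df, 1.3, m=2)  # converges to the double root x ≈ 1
```

With m = 2 the iteration reaches the double root in a handful of steps, as in the transcript, instead of the slow linear convergence seen with m = 1.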
Remarks: Newton-Raphson Method
Although Newton-Raphson converges rapidly, it may diverge and fail to find roots:
• if an inflection point (f'' = 0) is near the root
• if there is a local minimum or maximum (f' = 0)
• if there are multiple roots
• if a zero slope is reached
Open method: convergence is not guaranteed.
Newton-Raphson Method
Examples of poor convergence:
Pro: the error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence.
Con: some functions show slow or poor convergence.
Secant Method
Formula for the secant method:
xi+1 = xi - f(xi)·(xi-1 - xi) / (f(xi-1) - f(xi))
Similar in form to the false position method (cf. Eq. 5.7 in the textbook). It still requires two initial estimates, but it does not bracket the root at all times: there is no sign test.
Secant Method: Algorithm
Open method:
1. Begin with any two starting points [a, b] = [x0, x1]
2. Calculate x2 using the secant formula
   x2 = x1 - f(x1)·(x0 - x1) / (f(x0) - f(x1))
3. Replace x0 with x1 and x1 with x2, then repeat from (2) until convergence is reached
Use the two most recently generated points in subsequent iterations (this is not a bracketing method!)
Exercise
Use the secant method to estimate the root of f(x) = e^-x - x. Start with the initial estimates x-1 = 0 and x0 = 1.0.
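The exercise can be sketched in Python (an illustrative version; the function name is an assumption):

```python
import math

def secant(f, x_prev, x_curr, tol=1e-10, max_iter=50):
    """Secant method: x_{i+1} = x_i - f(x_i)*(x_{i-1} - x_i)/(f(x_{i-1}) - f(x_i))."""
    for _ in range(max_iter):
        x_next = x_curr - f(x_curr) * (x_prev - x_curr) / (f(x_prev) - f(x_curr))
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

# Exercise: f(x) = e^-x - x with x_{-1} = 0 and x_0 = 1.0
root = secant(lambda x: math.exp(-x) - x, 0.0, 1.0)
print(root)  # about 0.5671, the same root found by fixed-point iteration
```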
Secant Method: Advantages and Disadvantages
Advantage: it can converge even faster than bracketing methods, and it does not need to bracket the root.
Disadvantage: it is not guaranteed to converge! It may diverge (fail to yield an answer).
Convergence not Guaranteed
(Figure: secant iterations on y = ln x over roughly -1 ≤ x ≤ 7; with no sign check, the iterates may not bracket the root.)
» [x1 f1]=secant('my_func',0,1,1.e-15,100);
secant method has converged
step     x       f
 1.0000  0       1.0000
 2.0000  1.0000  -1.0000
 3.0000  0.5000  -0.3750
 4.0000  0.2000   0.4080
 5.0000  0.3563  -0.0237
 6.0000  0.3477  -0.0011
 7.0000  0.3473   0.0000
 8.0000  0.3473   0.0000
 9.0000  0.3473   0.0000
10.0000  0.3473   0.0000
» [x2 f2]=false_position('my_func',0,1,1.e-15,100);
false_position method has converged
step     xl  xu      x       f
 1.0000  0   1.0000  0.5000  -0.3750
 2.0000  0   0.5000  0.3636  -0.0428
 3.0000  0   0.3636  0.3487  -0.0037
 4.0000  0   0.3487  0.3474  -0.0003
 5.0000  0   0.3474  0.3473   0.0000
 6.0000  0   0.3473  0.3473   0.0000
 7.0000  0   0.3473  0.3473   0.0000
 8.0000  0   0.3473  0.3473   0.0000
 9.0000  0   0.3473  0.3473   0.0000
10.0000  0   0.3473  0.3473   0.0000
11.0000  0   0.3473  0.3473   0.0000
12.0000  0   0.3473  0.3473   0.0000
13.0000  0   0.3473  0.3473   0.0000
14.0000  0   0.3473  0.3473   0.0000
15.0000  0   0.3473  0.3473   0.0000
16.0000  0   0.3473  0.3473   0.0000
f(x) = x^3 - 3x + 1 = 0
Secant method vs false position method
The secant method may converge even faster and does not need to bracket the root.
Convergence criterion 10^-14:
Bisection: 47 iterations
False position: 15 iterations
Secant: 10 iterations
Newton's: 6 iterations
Modified Secant Method
Use a fractional perturbation instead of two arbitrary values to estimate the derivative:
f'(xi) ≈ (f(xi + δxi) - f(xi)) / (δxi)
xi+1 = xi - δxi·f(xi) / (f(xi + δxi) - f(xi))
δ is a small perturbation fraction (e.g., δ = 10^-6)
MATLAB Function: fzero
Bracketing methods are reliable but slow; open methods are fast but possibly unreliable. MATLAB's fzero is fast and reliable.
It finds a real root of an equation (not suitable for a double root!)
fzero(function, x0)
fzero(function, [x0 x1])
>> root=fzero('multi_func',-10)
root = 2.99997215011186
>> root=fzero('multi_func',1000)
root = 2.99996892915965
>> root=fzero('multi_func',[-1000 1000])
root = 2.99998852581534
>> root=fzero('multi_func',[-2 2])
??? Error using ==> fzero
The function values at the interval endpoints must differ in sign.
fzero is unable to find the double root of f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0

function f = multi_func(x)
% Exact solutions: x = 1 (double) and x = 3 (triple)
f = x.^5 - 11*x.^4 + 46*x.^3 - 90*x.^2 + 81*x - 27;
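The failure at the double root is easy to verify numerically: the function does not change sign there, so any sign-test-based solver has nothing to bisect. A quick Python check (illustrative, not the course's code):

```python
def f(x):
    # f(x) = (x - 1)^2 * (x - 3)^3
    return x**5 - 11*x**4 + 46*x**3 - 90*x**2 + 81*x - 27

# Around the double root x = 1 the function does NOT change sign...
same_sign_at_double = (f(0.9) > 0) == (f(1.1) > 0)

# ...but around the triple root x = 3 it does, so fzero can find x = 3.
sign_change_at_triple = (f(2.9) > 0) != (f(3.1) > 0)

print(same_sign_at_double, sign_change_at_triple)  # True True
```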
Roots of Polynomials
Bisection, false-position, Newton-Raphson, and secant methods cannot easily determine all roots of a higher-order polynomial.
Muller's method (Chapra and Canale, 2002)
Bairstow's method (Chapra and Canale, 2002)
MATLAB function: roots
Muller's Method
(Figure: a secant line through x1 and x2 vs a parabola fitted through x1, x2, x3 on the curve y(x).)
Fit a parabola (quadratic) to the exact curve. The method can find both real and complex roots (x^2 + rx + s = 0).
MATLAB Function: roots
Recasts the root-evaluation task as an eigenvalue problem (Chapter 20)
Zeros of an nth-order polynomial
p(x) = c_n x^n + c_(n-1) x^(n-1) + ... + c_2 x^2 + c_1 x + c_0
coefficient vector: (c_n, c_(n-1), ..., c_2, c_1, c_0)
r = roots(c): the roots
c = poly(r): the inverse function
Roots of Polynomial
Consider the 6th-order polynomial
f(x) = x^6 - 6x^5 + 14x^4 + 10x^3 - 111x^2 + 56x + 156 = 0
Roots: x = -1, -2, 2, 3, 2 ± 3i
>> c = [1 -6 14 10 -111 56 156];
>> r = roots(c)
r =
   2.0000 + 3.0000i
   2.0000 - 3.0000i
   3.0000
   2.0000
  -2.0000
  -1.0000
>> polyval(c, r), format long g
ans =
   1.36424205265939e-012 + 4.50484094471904e-012i
   1.36424205265939e-012 - 4.50484094471904e-012i
  -1.30739863379858e-012
   1.4210854715202e-013
   7.105427357601e-013
   5.6843418860808e-014
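NumPy's numpy.roots uses the same companion-matrix eigenvalue idea as MATLAB's roots; assuming NumPy is available, the session above translates directly:

```python
import numpy as np

# Coefficient vector in descending powers, as in the MATLAB session
c = [1, -6, 14, 10, -111, 56, 156]
r = np.roots(c)  # eigenvalues of the companion matrix

# The residuals p(r) should be near zero for every computed root
residuals = np.polyval(c, r)
```

As in the MATLAB output, the residuals are on the order of 1e-12 or smaller: the roots are found to near machine precision.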