Second Term 05/06 1
Roots of Equations
Open Methods
The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method
B. Open Methods
[Figure: (a) bisection method; (b) open method, diverging; (c) open method, converging]

To find the root of f(x) = 0, we construct a magic formula

   xi+1 = g(xi)

to predict the root iteratively until x converges to a root. However, x may diverge!
What you should know about Open Methods
How do we construct the magic formula g(x)?
How can we ensure convergence?
What makes a method converge quickly or diverge?
How fast does a method converge?
B.1. Fixed Point Iteration
• Also known as one-point iteration or successive substitution.
• To find the root of f(x) = 0, we rearrange f(x) = 0 so that there is an x on one side of the equation:

   f(x) = 0  ⇒  x = g(x)

• If we can solve g(x) = x, we solve f(x) = 0.
• We solve g(x) = x by computing

   xi+1 = g(xi), with x0 given,

until xi+1 converges to xi.
Fixed Point Iteration – Example

f(x) = x^2 − 2x − 3 = 0

   x^2 − 2x − 3 = 0  ⇒  x^2 = 2x + 3  ⇒  x = sqrt(2x + 3)

so we iterate with

   xi+1 = g(xi) = sqrt(2xi + 3)

Reason: when x converges, i.e. xi+1 ≈ xi = x,

   x = sqrt(2x + 3)  ⇒  x^2 = 2x + 3  ⇒  x^2 − 2x − 3 = 0

so the converged x is a root of f(x) = 0.
Example
Find the root of f(x) = e^−x − x = 0. (Answer: α = 0.56714329)
We put xi+1 = e^−xi.
i xi εa (%) εt (%)
0 0 100.0
1 1.000000 100.0 76.3
2 0.367879 171.8 35.1
3 0.692201 46.9 22.1
4 0.500473 38.3 11.8
5 0.606244 17.4 6.89
6 0.545396 11.2 3.83
7 0.579612 5.90 2.20
8 0.560115 3.48 1.24
9 0.571143 1.93 0.705
10 0.564879 1.11 0.399
Two Curve Graphical Method
Demo
The point, x, where the two curves,
f1(x) = x and
f2(x) = g(x),
intersect is the solution to f(x) = 0.
Fixed Point Iteration

f(x) = x^2 − 2x − 3 = 0    (roots: x = 3 or −1)

• There are infinitely many ways to construct g(x) from f(x). For example:

Case a:  x^2 − 2x − 3 = 0  ⇒  x^2 = 2x + 3  ⇒  x = sqrt(2x + 3)  ⇒  g(x) = sqrt(2x + 3)
Case b:  x^2 − 2x − 3 = 0  ⇒  x(x − 2) = 3  ⇒  x = 3/(x − 2)     ⇒  g(x) = 3/(x − 2)
Case c:  x^2 − 2x − 3 = 0  ⇒  2x = x^2 − 3  ⇒  x = (x^2 − 3)/2   ⇒  g(x) = (x^2 − 3)/2

So which one is better?
Case a: xi+1 = sqrt(2xi + 3)
   x0 = 4
   x1 = 3.31662
   x2 = 3.10375
   x3 = 3.03439
   x4 = 3.01144
   x5 = 3.00381
   Converge!

Case b: xi+1 = 3/(xi − 2)
   x0 = 4
   x1 = 1.5
   x2 = −6
   x3 = −0.375
   x4 = −1.263158
   x5 = −0.919355
   x6 = −1.02762
   x7 = −0.990876
   x8 = −1.00305
   Converge, but slower

Case c: xi+1 = (xi^2 − 3)/2
   x0 = 4
   x1 = 6.5
   x2 = 19.625
   x3 = 191.070
   Diverge!
How to choose g(x)?
• Can we know whether a given g(x) will converge to the solution before we do the computation?
Convergence of Fixed Point Iteration
By definition,

   εi = α − xi          (1)
   εi+1 = α − xi+1      (2)

Fixed point iteration:

   xi+1 = g(xi)         (3)

and, since α is a fixed point,

   α = g(α)             (4)

(3) and (4)  ⇒  α − xi+1 = g(α) − g(xi)     (5)
Sub (2) in (5)  ⇒  εi+1 = g(α) − g(xi)      (6)
According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over an interval xi ≤ x ≤ α, there exists a value x = ξ within the interval such that

   g(α) − g(xi) = g'(ξ)(α − xi)     (7)

From (1) and (6) we have εi = α − xi and εi+1 = g(α) − g(xi). Thus (7) gives

   εi+1 = g'(ξ) εi

• Therefore, if |g'(x)| < 1, the error decreases with each iteration. If |g'(x)| > 1, the error increases.
• If the derivative is positive, the iterative solution will be monotonic.
• If the derivative is negative, the errors will oscillate.
Demo
(a) |g'(x)| < 1, g'(x) positive: converge, monotonic
(b) |g'(x)| < 1, g'(x) negative: converge, oscillate
(c) |g'(x)| > 1, g'(x) positive: diverge, monotonic
(d) |g'(x)| > 1, g'(x) negative: diverge, oscillate
Fixed Point Iteration Impl. (as C function)

// x0:       Initial guess of the root
// es:       Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
double FixedPt(double x0, double es, int iter_max) {
    double xr = x0;    // Estimated root
    double xr_old;     // Keep xr from previous iteration
    double ea = 100;   // Estimated relative percentage error
    int iter = 0;      // Keep track of # of iterations

    do {
        xr_old = xr;
        xr = g(xr_old);  // g(x) has to be supplied
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        iter++;
    } while (ea > es && iter < iter_max);

    return xr;
}
The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method
B.2. Newton-Raphson Method
Use the slope of f(x) to predict the location of the root: xi+1 is the point where the tangent at xi intersects the x-axis.

   f'(xi) = (f(xi) − 0) / (xi − xi+1)

   ⇒  xi+1 = xi − f(xi) / f'(xi)
Newton-Raphson Method
What would happen when f '(α) = 0?
For example, f(x) = (x − 1)^2 = 0.

   xi+1 = xi − f(xi) / f'(xi)
Error Analysis of Newton-Raphson Method
By definition
   δi = α − xi          (1)
   δi+1 = α − xi+1      (2)

Newton-Raphson method:

   xi+1 = xi − f(xi) / f'(xi)

   ⇒  f(xi) = f'(xi)(xi − xi+1)
            = f'(xi)((α − xi+1) − (α − xi))
            = f'(xi)(α − xi+1) − f'(xi)(α − xi)     (3)
Suppose α is the true value (i.e., f(α) = 0). Using Taylor's series,

   f(α) = f(xi) + f'(xi)(α − xi) + (f''(ξ)/2)(α − xi)^2

   ⇒  0 = f(xi) + f'(xi)(α − xi) + (f''(ξ)/2)(α − xi)^2

Sub in (3):

   0 = f'(xi)(α − xi+1) − f'(xi)(α − xi) + f'(xi)(α − xi) + (f''(ξ)/2)(α − xi)^2
     = f'(xi)(α − xi+1) + (f''(ξ)/2)(α − xi)^2

From (1) and (2):

   0 = f'(xi) δi+1 + (f''(ξ)/2) δi^2

   ⇒  δi+1 = − (f''(ξ) / (2 f'(xi))) δi^2

When xi and α are very close to each other, ξ is between xi and α, so

   δi+1 ≈ − (f''(α) / (2 f'(α))) δi^2

The iterative process is said to be of second order.
The Order of Iterative Process (Definition)
Using an iterative process we get xk+1 from xk and other info.
We have x0, x1, x2, …, xk+1 as the estimates of the root α.
Let δk = α − xk. Then we may observe

   δk+1 = O(δk^p)

The process in such a case is said to be of p-th order.
• It is called superlinear if p > 1.
• It is called linear if p = 1.
• It is called sublinear if p < 1.
Error of the Newton-Raphson Method
Each error is approximately proportional to the square of the previous error. This means that the number of correct decimal places roughly doubles with each approximation.
Example: Find the root of f(x) = e-x - x = 0
(Ans: α= 0.56714329)
We put

   xi+1 = xi − (e^−xi − xi) / (−e^−xi − 1)

Error analysis (note that e^−α = α at the root):

   f'(α) = −e^−α − 1 = −1.56714329
   f''(α) = e^−α = 0.56714329
Error Analysis
   |δi+1| = |f''(α) / (2 f'(α))| δi^2 = (0.56714329 / (2 × 1.56714329)) δi^2 = 0.18095 δi^2

i   xi            εt (%)      |δi|           estimated |δi+1|
0   0             100         0.56714329     0.0582
1   0.500000000   11.8        0.06714329     0.0008158
2   0.566311003   0.147       0.0008323      0.000000125
3   0.567143165   0.0000220   0.000000125    2.83x10^−15
4   0.567143290   < 10^−8
Newton-Raphson vs. Fixed Point Iteration
Find the root of f(x) = e^−x − x = 0. (Answer: α = 0.56714329)

Fixed Point Iteration with xi+1 = e^−xi:

i   xi         εa (%)   εt (%)
0   0                   100.0
1   1.000000   100.0    76.3
2   0.367879   171.8    35.1
3   0.692201   46.9     22.1
4   0.500473   38.3     11.8
5   0.606244   17.4     6.89
6   0.545396   11.2     3.83
7   0.579612   5.90     2.20
8   0.560115   3.48     1.24
9   0.571143   1.93     0.705
10  0.564879   1.11     0.399

Newton-Raphson:

i   xi            εt (%)      |δi|
0   0             100         0.56714329
1   0.500000000   11.8        0.06714329
2   0.566311003   0.147       0.0008323
3   0.567143165   0.0000220   0.000000125
4   0.567143290   < 10^−8
Pitfalls of the Newton-Raphson Method
• Sometimes slow. For example, f(x) = x^10 − 1 = 0 with x0 = 0.5:

iteration   x
0           0.5
1           51.65
2           46.485
3           41.8365
4           37.65285
5           33.8877565
…           …
∞           1.0000000
Figure (a)
An inflection point (f''(x) = 0) in the vicinity of a root causes divergence.
Figure (b)
A local maximum or minimum causes oscillations.
Figure (c)
It may jump from one location close to one root to a location that is several roots away.
Figure (d)
A zero slope causes division by zero.
Overcoming the Pitfalls?
• There are no general convergence criteria for the Newton-Raphson method.
• Convergence depends on the nature of the function and on the accuracy of the initial guess.
  – A guess that is close to the true root is always a better choice.
  – Good knowledge of the function or graphical analysis can help you make good guesses.
• Good software should recognize slow convergence or divergence.
  – At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.
The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method
B.3. Secant Method
The Newton-Raphson method needs to compute the derivative. The secant method approximates the derivative by a finite divided difference:

   f'(xi) ≈ (f(xi−1) − f(xi)) / (xi−1 − xi)

From the Newton-Raphson method,

   xi+1 = xi − f(xi) / f'(xi)

   ⇒  xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))
Secant Method
Secant Method – Example
Find the root of f(x) = e^−x − x = 0 with initial estimates x−1 = 0 and x0 = 1.0. (Answer: α = 0.56714329)

   xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))

i   xi−1      xi        f(xi−1)    f(xi)      xi+1      εt
0   0         1         1.00000    -0.63212   0.61270   8.0 %
1   1         0.61270   -0.63212   -0.07081   0.56384   0.58 %
2   0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048 %

Again, compare these results with those obtained by the Newton-Raphson method and the simple fixed point iteration method.
Comparison of the Secant and False-position methods
• Both methods use the same expression to compute the new estimate:

   Secant:          xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))
   False position:  xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))

• They differ in how the previous values are replaced by the new estimate. (see next page)
Comparison of the Secant and False-position methods

[Figure: both methods applied to f(x) = e^−x − x]
Second Term 05/06 36
Modified Secant Method
• Replace xi−1 − xi by δxi and approximate f'(x) as

   f'(xi) ≈ (f(xi + δxi) − f(xi)) / δxi

• From the Newton-Raphson method,

   xi+1 = xi − f(xi) / f'(xi)

   ⇒  xi+1 = xi − δxi f(xi) / (f(xi + δxi) − f(xi))

• Needs only one initial guess point instead of two.
Second Term 05/06 37
Modified Secant Method
Find the root of f(x) = e^−x − x = 0 with initial estimate x0 = 1.0 and δ = 0.01. (Answer: α = 0.56714329)

i   xi         xi+δxi     f(xi)      f(xi+δxi)   xi+1
0   1          1.01       -0.63212   -0.64578    0.537263
1   0.537263   0.542635   0.047083   0.038579    0.56701
2   0.56701    0.572680   0.000209   -0.00867    0.567143

Compared with the secant method:

i   xi−1      xi        f(xi−1)    f(xi)      xi+1      εt
0   0         1         1.00000    -0.63212   0.61270   8.0 %
1   1         0.61270   -0.63212   -0.07081   0.56384   0.58 %
2   0.61270   0.56384   -0.07081   0.00518    0.56717   0.0048 %
Second Term 05/06 38
Modified Secant Method – About δ
If δ is too small, the method can be swamped by round-off error caused by subtractive cancellation in the denominator of

   xi+1 = xi − δxi f(xi) / (f(xi + δxi) − f(xi))

If δ is too big, the technique can become inefficient and even divergent.
If δ is selected properly, this method provides a good alternative for cases where developing two initial guesses is inconvenient.
Second Term 05/06 39
The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method
Can they handle multiple roots?
Multiple Roots
• A multiple root corresponds to a point where a function is tangent to the x axis.
• For example, this function has a double root at x = 1:

   f(x) = (x − 3)(x − 1)(x − 1) = x^3 − 5x^2 + 7x − 3

• And this function has a triple root at x = 1:

   f(x) = (x − 3)(x − 1)(x − 1)(x − 1) = x^4 − 6x^3 + 12x^2 − 10x + 3
• Odd multiple roots cross the axis. (Figure (b))
• Even multiple roots do not cross the axis. (Figure (a) and (c))
Difficulties when we have multiple roots
• Bracketing methods do not work for even multiple roots.
• At a multiple root, f(α) = f'(α) = 0, so both f(xi) and f'(xi) approach zero near the root. This can result in division by zero; a zero check for f(x) should be incorporated so that the computation stops before f'(x) reaches zero.
• For multiple roots, the Newton-Raphson and secant methods converge only linearly, rather than quadratically.
Modified Newton-Raphson Methods for Multiple Roots
• Suggested Solution 1:

Let m be the multiplicity of the root. Define

   f̃(x) = [f(x)]^(1/m)

Then α is a single root of f̃(x) = 0, and applying Newton-Raphson to f̃(x) gives

   xi+1 = xi − f̃(xi) / f̃'(xi) = xi − m f(xi) / f'(xi)

Disadvantage: works only when m is known.
• Suggested Solution 2:

Define

   u(x) = f(x) / f'(x)      (1)

u(x) has roots at all the same locations as f(x), but they are all single roots. Apply Newton-Raphson to u(x):

   xi+1 = xi − u(xi) / u'(xi)      (2)

Differentiate (1):

   u'(x) = [f'(x) f'(x) − f(x) f''(x)] / [f'(x)]^2      (3)

Sub (1) and (3) into (2):

   xi+1 = xi − f(xi) f'(xi) / ( [f'(xi)]^2 − f(xi) f''(xi) )
Example of the Modified Newton-Raphson Method for Multiple Roots
• Original Newton-Raphson method:

   f(x) = (x − 3)(x − 1)(x − 1) = x^3 − 5x^2 + 7x − 3

   xi+1 = xi − f(xi) / f'(xi) = xi − (xi^3 − 5xi^2 + 7xi − 3) / (3xi^2 − 10xi + 7)
i xi εt (%)
0 0 100
1 0.4285714 57
2 0.6857143 31
3 0.8328654 17
4 0.9133290 8.7
5 0.9557833 4.4
6 0.9776551 2.2
The method is linearly convergent toward the true value of 1.0.
• For the modified algorithm:

   f(x) = (x − 3)(x − 1)(x − 1) = x^3 − 5x^2 + 7x − 3

   xi+1 = xi − f(xi) f'(xi) / ( [f'(xi)]^2 − f(xi) f''(xi) )
        = xi − (xi^3 − 5xi^2 + 7xi − 3)(3xi^2 − 10xi + 7) / [ (3xi^2 − 10xi + 7)^2 − (xi^3 − 5xi^2 + 7xi − 3)(6xi − 10) ]
i xi εt (%)
0 0 100
1 1.105263 11
2 1.003082 0.31
3 1.000002 0.00024
• How about their performance in finding the single root (x = 3, starting from x0 = 4)?
i Standard εt (%) Modified εt (%)
0 4 33 4 33
1 3.4 13 2.636364 12
2 3.1 3.3 2.820225 6.0
3 3.008696 0.29 2.961728 1.3
4 3.000075 0.0025 2.998479 0.05
5 3.000000 2x10-7 2.999998 7.7x10-5
Modified Newton-Raphson Methods for Multiple Roots
• What's the disadvantage of the modified Newton-Raphson methods for multiple roots compared with the original Newton-Raphson method?
• Note that the secant method can also be modified in a similar fashion for multiple roots.
Summary of Open Methods
• Unlike bracketing methods, open methods do not always converge.
• Open methods, if they converge, usually converge more quickly than bracketing methods.
• Open methods can locate even multiple roots, whereas bracketing methods cannot. (why?)
Study Objectives
• Understand the graphical interpretation of a root
• Understand the differences between bracketing methods and open methods for root location
• Understand the concept of convergence and divergence
• Know why bracketing methods always converge, whereas open methods may sometimes diverge
• Realize that convergence of open methods is more likely if the initial guess is close to the true root
Study Objectives
• Understand what conditions make a method converge quickly or diverge
• Understand the concepts of linear and quadratic convergence and their implications for the efficiencies of the fixed-point iteration and Newton-Raphson methods
• Know the fundamental difference between the false-position and secant methods and how it relates to convergence
• Understand the problems posed by multiple roots and the modifications available to mitigate them
Analysis of Convergent Rate
Suppose g(x) converges to the solution α. Then the Taylor series of g(α) about xi can be expressed as

   g(α) = g(xi) + g'(xi)(α − xi) + (g''(xi)/2!)(α − xi)^2 + …      (1)

By definition,

   g(α) = α,   g(xi) = xi+1,   εi+1 = α − xi+1,   εi = α − xi

Thus (1) becomes

   α = xi+1 + g'(xi) εi + (g''(xi)/2!) εi^2 + …

   ⇒  εi+1 = g'(xi) εi + (g''(xi)/2!) εi^2 + …      (2)
Analysis of Convergent Rate
When xi is very close to the solution, we can rewrite (2) as

   εi+1 = g'(α) εi + (g''(α)/2!) εi^2 + (g'''(α)/3!) εi^3 + …      (3)

Suppose g^(n) exists and the nth term is the first non-zero term. Then

   εi+1 ≈ (g^(n)(α) / n!) εi^n

Thus, to analyze the convergent rate, we can find the smallest n such that g^(n)(α) ≠ 0.