Euler’s method, Runge Kutta methods, Predictor-Corrector methods


Numerical Methods: Introduction

This chapter describes a few methods for finding approximate values of the solution of a non-linear initial value problem. It is by no means an exhaustive study, but it can be looked upon as an introduction to numerical methods; no attempt is made towards a deeper analysis. A question may arise: why does one need numerical methods for differential equations? Differential equations play an important role in many problems of engineering and science, because they arise in the mathematical modelling of many physical problems. The use of numerical methods has become vital in the absence of explicit solutions. Normally, numerical methods have two major roles:

1. the amenability of the method for easy implementation on a computer;
2. the method allows us to deal with the analysis of error estimates.

In this chapter, we do not enter into the aspect of error analysis. For the present, we deal with some of the numerical methods to find approximate values of the solution of initial value problems (IVPs) on finite intervals. We also mention that no effort is made to study boundary value problems. Let us consider an initial value problem

y' = f(x, y),  y(x_0) = y_0,  x_0 <= x <= x_f,        (14.1.1)

where x_0 and y_0 are prescribed real numbers and f is a given function. By Picard's Theorem 7.6.8, we know that under certain conditions, the initial value problem (14.1.1) has a unique solution y. Our aim is to determine y on the interval [x_0, x_f]; in particular, our aim is to determine y at x = x_f. To do so, we divide the interval [x_0, x_f] into n equal parts by the points

x_0 < x_1 < x_2 < ... < x_n = x_f,  where x_k = x_0 + kh and h = (x_f - x_0)/n.

We know the value of y at x_0. With this data and with the use of (14.1.1), we determine the approximate value of y at x_1, denoted by y_1. We repeat this process till we reach the point x_n = x_f. The details are given in the ensuing sections.

The number h is called the step size; it depends on n and on the length of the interval [x_0, x_f]. For k = 0, 1, ..., n, the value of y at x_k is y(x_k) and its approximate value is denoted by y_k. In this chapter, we develop

mywbut.com


methods for determining an approximate value of y(x_f), which is denoted by y_n. This is achieved by evaluating, in turn, the approximate values y_1, y_2, ..., y_{n-1}. Normally y_0 is called the INITIAL VALUE, y_1 is called the FIRST STEP, and y_2, y_3, ..., y_n are called the subsequent steps. The approximate value y_k is called the approximation of the function y at the k-th step. A method which uses only y_k to determine y_{k+1} is called a SINGLE STEP METHOD. If the method uses more than one of the values y_k, y_{k-1}, ..., it is called a MULTI-STEP METHOD.

In the sequel, we deal with some simple single step methods to find the approximate value of the solution of (14.1.1). In these methods, our stress is on the use of computers for numerical evaluation; in other words, the implementation of the methods on a computer is one of our present aims. Each method will be pictorially represented by what is called a flow chart: a pictorial representation which shows the sequence of each step of the computation, including the input, the start and the termination of the method.

Euler's Method

For a small step size h, the derivative y'(x) is close to the ratio (y(x + h) - y(x))/h. In Euler's method, such an approximation is attempted. To recall, we consider the problem (14.1.1). Let h = (x_f - x_0)/n be the step size and let x_k = x_0 + kh, with k = 0, 1, ..., n. Let y_k be the approximate value of y at x_k. We define

y_{k+1} = y_k + h f(x_k, y_k),  k = 0, 1, ..., n - 1.        (14.1.2)

The method of determination of y_{k+1} by (14.1.2) is called the EULER'S METHOD. For convenience, the value f(x_k, y_k) is denoted by f_k, for k = 0, 1, ..., n - 1.
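In code, the scheme (14.1.2) is only a few lines. The following Python sketch is ours, not part of the text (the function and variable names are our own choices):

```python
def euler(f, x0, y0, xf, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0 on [x0, xf]
    by Euler's method with n equal steps; returns [y_0, y_1, ..., y_n]."""
    h = (xf - x0) / n              # step size
    x, y = x0, y0
    ys = [y0]
    for _ in range(n):
        y = y + h * f(x, y)        # y_{k+1} = y_k + h f(x_k, y_k)
        x = x + h
        ys.append(y)
    return ys
```

For instance, for y' = y, y(0) = 1 the method gives y_n = (1 + h)^n, so euler(lambda x, y: y, 0.0, 1.0, 1.0, 100)[-1] returns (1.01)^100, roughly 2.70481, an approximation of e.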


Remark 14.1.1

1. Euler's method is a one-step method.

2. The Euler's method has a few motivations.

   1. The derivative y'(x) at x = x_k can be approximated by (y(x_k + h) - y(x_k))/h if h is sufficiently small. Using such an approximation in (14.1.1), we have

      y(x_k + h) ~ y(x_k) + h f(x_k, y(x_k)).

   2. We can also look at (14.1.2) from the following point of view. The integration of (14.1.1) yields

      y(x_{k+1}) = y(x_k) + int_{x_k}^{x_{k+1}} f(s, y(s)) ds.

      The integral on the right hand side is approximated, for a sufficiently small value of h, by h f(x_k, y(x_k)).

   3. Moreover, if y is differentiable a sufficient number of times, we can also arrive at (14.1.2) by considering the Taylor's expansion

      y(x_k + h) = y(x_k) + h y'(x_k) + (h^2/2) y''(x_k) + ...

      and neglecting terms that contain powers of h greater than or equal to h^2.

We illustrate the Euler's method with an example. The example is only for illustration: in (14.1.3) below, we do not really need numerical computation at each step, as we know the exact value of the solution. The purpose of the example is to get a feeling for the behaviour of the error and its estimate. It will be more transparent to look at the percentage of error, which may throw more light on the propagation of error.


EXAMPLE 14.1.2 Use Euler's algorithm to find an approximate value of y(0.5), where y is the solution of the IVP

y' = y^2,  y(0) = 1,        (14.1.3)

with step sizes h = 0.1 and h = 0.05. Show that the exact solution of the IVP is y(x) = 1/(1 - x). Calculate the error at each step and tabulate the results.

Solution: Comparing the given IVP with (14.1.1), we note that x_0 = 0, y_0 = 1 and f(x, y) = y^2.

The Euler's algorithm now reads as

y_{k+1} = y_k + h y_k^2,  k = 0, 1, ..., n - 1.        (14.1.4)

It is left as an exercise to verify that y(x) = 1/(1 - x) is a solution of the given IVP. So, the absolute value of the error at the k-th step is

e_k = |y(x_k) - y_k|.

See the Tables 14.1 and 14.2 for the calculation of errors (up to 5 places of decimal).
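The verification that y(x) = 1/(1 - x) solves the IVP can also be carried out numerically, by comparing a centered difference quotient of 1/(1 - x) with y^2 at a few sample points; this little check is ours, not part of the text:

```python
def check_exact_solution(xs, eps=1e-6):
    """Check numerically that y(x) = 1/(1 - x) satisfies y' = y^2:
    a centered difference quotient should agree with y(x)^2."""
    y = lambda x: 1.0 / (1.0 - x)
    for x in xs:
        dydx = (y(x + eps) - y(x - eps)) / (2.0 * eps)   # ~ y'(x)
        assert abs(dydx - y(x) ** 2) < 1e-4
    return True

check_exact_solution([0.0, 0.1, 0.25, 0.4, 0.5])
```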

Table 14.1 (Euler's method for (14.1.3) with h = 0.1)

x_{k+1}    y_k        h          y_{k+1} (approx)   y(x_{k+1}) (exact)   error


0.10000    1.00000    0.10000    1.10000            1.11111              0.01111
0.20000    1.10000    0.10000    1.22100            1.25000              0.02900
0.30000    1.22100    0.10000    1.37008            1.42857              0.05849
0.40000    1.37008    0.10000    1.55780            1.66667              0.10887
0.50000    1.55780    0.10000    1.80047            2.00000              0.19953
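Table 14.1 can be reproduced in a few lines; the sketch below (ours) carries full precision internally and rounds to five decimals only for display:

```python
def euler_table(h, steps):
    """Tabulate Euler's method for y' = y^2, y(0) = 1: returns rows
    (x_{k+1}, y_{k+1} approx, y(x_{k+1}) exact, error)."""
    rows, x, y = [], 0.0, 1.0
    for _ in range(steps):
        y = y + h * y * y                  # Euler step for f(x, y) = y^2
        x = x + h
        exact = 1.0 / (1.0 - x)
        rows.append((round(x, 5), round(y, 5), round(exact, 5), round(exact - y, 5)))
    return rows

for row in euler_table(0.1, 5):
    print(row)                             # last row: (0.5, 1.80047, 2.0, 0.19953)
```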

Table 14.2 (Euler's method for (14.1.3) with h = 0.05)

x_{k+1}    y_k        h          y_{k+1} (approx)   y(x_{k+1}) (exact)   error
0.05000    1.00000    0.05000    1.05000            1.05263              0.00263

0.10000    1.05000    0.05000    1.10513            1.11111              0.00599
0.15000    1.10513    0.05000    1.16619            1.17647              0.01028
0.20000    1.16619    0.05000    1.23419            1.25000              0.01581
0.25000    1.23419    0.05000    1.31035            1.33333              0.02298
0.30000    1.31035    0.05000    1.39620            1.42857              0.03237
0.35000    1.39620    0.05000    1.49367            1.53846              0.04479
0.40000    1.49367    0.05000    1.60522            1.66667              0.06144
0.45000    1.60522    0.05000    1.73406            1.81818              0.08412


0.50000    1.73406    0.05000    1.88441            2.00000              0.11559

Remark 14.1.3 At each step, the approximation of the derivative induces a certain error. At the next step, the approximation of the derivative and the approximate value from the previous step can increase the error. This is what we see from the last column of the tables. As mentioned earlier, the percentage of error perhaps gives a better view of the accumulation of the error. With this in mind, we have a look at the error estimates, which are the topic of the ensuing section.

We now give a sample flow chart for the Euler's method.

Figure: Flow Chart for Euler's method

Error Estimates and Convergence

When we develop a method, specially to get approximate values of a certain quantity, it is desirable to know how much the computed value deviates from the actual one. This difference between the exact and the computed value is usually known as the error committed. An estimate for the error also indicates how good the calculated approximate value is. In general, such a feature may not be possible. Euler's method is one method which allows us


for an analysis of the error, which is the main aim of this section. Note that we are not dealing with the truncation error in actual calculation. Recall that:

1. x_k = x_0 + kh for k = 0, 1, ..., n, with h = (x_f - x_0)/n;

2. y_k is the approximate solution of the IVP defined by the Euler's method, namely

   y_{k+1} = y_k + h f(x_k, y_k),  y_0 = y(x_0);        (14.2.5)

3. y(x_k) is the value of the exact solution at x_k.

Note here that y_k is the approximate value of y(x_k) for k = 0, 1, ..., n. The quantity

e_k = |y(x_k) - y_k|

is the absolute deviation of y_k from y(x_k) for k = 0, 1, ..., n. So, the e_k's are called the absolute error estimates, committed at the k-th step. This is also called the discretization error. In this section, we examine the nature of e_k. It is desirable that e_k is "small" when the step size h is small. In this connection, we have the following result.

THEOREM 14.2.1 Consider the IVP

y' = f(x, y),  y(x_0) = y_0,  x_0 <= x <= x_f.        (14.2.6)

Let the exact solution y of (14.2.6) be a twice continuously differentiable function on [x_0, x_f] and, for k = 0, 1, ..., n, let y_k be the approximate solution of (14.2.6) at x_k obtained by Euler's method. Further, suppose that there exist positive constants L and M such that, for all x_0 <= x <= x_f,

|f_y(x, y)| <= L  and  |y''(x)| <= M.

Then

e_k = |y(x_k) - y_k| <= (hM / 2L) [(1 + hL)^k - 1],  k = 0, 1, ..., n.        (14.2.7)


Proof. By Taylor's formula with remainder, for each k,

y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (h^2/2) y''(c_k)

for some c_k satisfying x_k < c_k < x_{k+1}. Also, by Euler's method, y_{k+1} = y_k + h f(x_k, y_k). So,

e_{k+1} <= e_k + h |f(x_k, y(x_k)) - f(x_k, y_k)| + (h^2/2) |y''(c_k)|.        (14.2.8)

Again, by the mean value theorem,

f(x_k, y(x_k)) - f(x_k, y_k) = f_y(x_k, d_k) (y(x_k) - y_k)

for some d_k lying between y(x_k) and y_k. Therefore, using the given bounds in the statement of the theorem, the error bound (see (14.2.8)) reduces to

e_{k+1} <= (1 + hL) e_k + (h^2 M)/2.        (14.2.9)

Using the principle of mathematical induction, it can be easily verified that if E_k is the solution of the difference equation

E_{k+1} = (1 + hL) E_k + (h^2 M)/2,  E_0 = 0,

then

1. e_k <= E_k for k = 0, 1, ..., n, and

2. E_k = (hM / 2L) [(1 + hL)^k - 1].

Hence, the proof of the theorem is complete.

Remark 14.2.2 Inequality (14.2.7) implies that the error e_k is of order O(h). Theorem 14.2.1 also gives an upper bound for the estimate of the error at the k-th step. Since the error at one step may induce more error at the next step, the final value y_n may have a cumulative error that is added at each step. The error committed in a single step is called a "local error"; the cumulative error at x_n is called the "global error". Theorem 14.2.1 does not throw any light on the estimate of the global error.
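The bound (14.2.7) can be tested on Example 14.1.2. On [0, 0.5] the exact solution satisfies 1 <= y <= 2, so we may take L = 4 (since |f_y| = |2y| <= 4) and M = 16 (since |y''| = |2y^3| <= 16). These constants, and the check below, are ours; the bound is pessimistic but valid:

```python
def check_error_bound(h=0.1, n=5, L=4.0, M=16.0):
    """For y' = y^2, y(0) = 1, verify that the Euler error e_k stays below
    (h*M/(2*L)) * ((1 + h*L)**k - 1), the bound (14.2.7)."""
    x, y = 0.0, 1.0
    for k in range(1, n + 1):
        y = y + h * y * y                        # Euler step
        x = x + h
        e_k = abs(1.0 / (1.0 - x) - y)           # actual error
        bound = (h * M / (2.0 * L)) * ((1.0 + h * L) ** k - 1.0)
        assert e_k <= bound, (k, e_k, bound)
    return e_k, bound

print(check_error_bound())   # final error ~0.1995, bound ~0.8756
```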


Runge-Kutta Method

The Runge-Kutta method is a more general and improved method as compared to Euler's method. It uses, as we shall see, the Taylor's expansion of a "smooth function" (thereby, we mean that the derivatives exist and are continuous up to a certain desired order). Before we proceed further, the following questions may arise in our mind, which have not found a place in our discussion so far.

1. How does one choose the starting values, sometimes called starters, that are required for implementing an algorithm?

2. Is it desirable to change the step size (or the length of the interval) during the computation, if the error estimate demands a change as a function of x?

For the present, the discussion of the second question is not taken up; we look more closely at the first in the ensuing discussion. There are many self-starter methods, like the Euler method, which use only the initial condition. But these methods are normally not very efficient since the error bounds may not be "good enough". We have seen in Theorem 14.2.1 that the local error (neglecting the rounding-off error) in the Euler's algorithm is of order O(h^2). This shows that as the value of h becomes smaller, the approximations improve. Moreover, an error of order h may not be sufficiently accurate for many problems. So, we look into a few methods where the error is of higher order. They are the Runge-Kutta (in short, R-K) methods. Let us analyze how the algorithm is reached before we actually state it. To do so, we consider the IVP

y' = f(x, y),  y(x_0) = y_0,  x_0 <= x <= x_f.

Define x_{k+1} = x_k + h, with h = (x_f - x_0)/n, and let y_k denote the approximate value of y at x_k. We now assume that f and y are smooth. Using Taylor's series, we now have

y(x_{k+1}) = y(x_k) + h y'(x_k) + (h^2/2) y''(x_k) + O(h^3)
           = y(x_k) + h f + (h^2/2) (f_x + f f_y) + O(h^3),        (14.3.10)

where f, f_x and f_y are evaluated at (x_k, y(x_k)). For constants a, b, α and β, consider the expression

y_{k+1} = y_k + a h K_1 + b h K_2,  where  K_1 = f(x_k, y_k),  K_2 = f(x_k + α h, y_k + β h K_1),        (14.3.11)


where a, b, α and β are constants. When a = 1 and b = 0, (14.3.11) reduces to the Euler's algorithm. We choose a, b, α and β so that the local truncation error is O(h^3). From the definition of K_2, we have

K_2 = f(x_k, y_k) + α h f_x + β h K_1 f_y + O(h^2),

where f_x and f_y denote the partial derivatives of f with respect to x and y respectively. Substituting these values in (14.3.11), we have

y_{k+1} = y_k + (a + b) h f + h^2 (b α f_x + b β f f_y) + O(h^3).        (14.3.12)

A comparison of (14.3.10) and (14.3.12) leads to the choice of

a + b = 1,  b α = 1/2,  b β = 1/2,        (14.3.13)

in order that the powers of h up to h^2 match (in some sense) in the approximate values of y(x_{k+1}). Here we note that we have three equations in the four unknowns a, b, α and β. So, we choose them so that (14.3.13) is satisfied. One of the simplest solutions is

a = b = 1/2,  α = β = 1.

Thus we are led to define

y_{k+1} = y_k + (h/2) [ f(x_k, y_k) + f(x_k + h, y_k + h f(x_k, y_k)) ].        (14.3.14)

Evaluation of y_{k+1} by (14.3.14) is called the RUNGE-KUTTA METHOD OF ORDER 2 (R-K method of order 2).

A few things are worth noting in the above discussion. Firstly, we need the existence of the partial derivatives of f up to order 2 for the R-K method of order 2; for higher order methods, we need f to be more smooth. Secondly, we note that the local truncation error in the R-K method of order 2 is of order O(h^3). Again, we remind the readers that the round-off error in the case


of implementation has not been considered. Also, in (14.3.14), the partial derivatives of f do not appear. In short, we are likely to get better accuracy in the Runge-Kutta method of order 2 in comparison with the Euler's method. Formally, we state the Runge-Kutta method of order 2.

Algorithm: Runge-Kutta Method of Order 2

For the IVP (14.1.1), let h = (x_f - x_0)/n, where h is the given prescribed step size, and let x_k = x_0 + kh. For k = 0, 1, ..., n - 1, define y_{k+1} by

K_1 = f(x_k, y_k),
K_2 = f(x_k + h, y_k + h K_1),
y_{k+1} = y_k + (h/2) (K_1 + K_2).

Then the flow chart associated with the R-K method of order 2 is

Figure: Flow-Chart of Runge-Kutta method of order 2
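As code, one step of the order-2 method costs two evaluations of f. A minimal sketch (ours, with our own names):

```python
def rk2(f, x0, y0, xf, n):
    """R-K method of order 2: y_{k+1} = y_k + (h/2)(K1 + K2), where
    K1 = f(x_k, y_k) and K2 = f(x_k + h, y_k + h*K1)."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y = y + (h / 2.0) * (k1 + k2)
        x = x + h
    return y
```

For the test problem y' = y^2, y(0) = 1 of the next example, a full-precision run with h = 0.05 gives a value of y(0.5) within about 0.005 of the exact value 2, an order of magnitude better than Euler's method with the same step size (error 0.11559 in Table 14.2).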

EXAMPLE 14.3.1 Use the Runge-Kutta method of order 2 to find the approximate value of y(0.5), where y is the solution of the IVP


y' = y^2,  y(0) = 1,

with step sizes h = 0.1 and h = 0.05. Also, calculate the error and tabulate the results.

Solution: Comparing the given IVP with (14.1.1), we note that x_0 = 0, y_0 = 1 and f(x, y) = y^2. We now calculate the values of K_1 and K_2 from the R-K method of order 2 and use the formula y_{k+1} = y_k + (h/2)(K_1 + K_2) to calculate the approximate values. The results are shown in Tables 14.3 and 14.4.

Table 14.3 (R-K method of order 2 with h = 0.1)

x_{k+1}    y_k        h          y_{k+1} (approx)   y(x_{k+1}) (exact)   error      hK_1       hK_2
0.00000    1.00000    0.10000    1.00000            1.00000              0.00000    0.10000    0.12100
0.10000    1.00000    0.10000    1.11050            1.11111              0.00061    0.10000    0.14653
0.20000    1.11050    0.10000    1.23377            1.25000              0.01623    0.12332    0.18417
0.30000    1.23377    0.10000    1.38751            1.42857              0.04106    0.15222    0.23708
0.40000    1.38751    0.10000    1.58216            1.66667              0.08451    0.19252    0.31495
0.50000    1.58216    0.10000    1.83589            2.00000              0.16411    0.25032    0.00627

Table 14.4 (R-K method of order 2 with h = 0.05)

x_{k+1}    y_k        h          y_{k+1} (approx)   y(x_{k+1}) (exact)   error      hK_1       hK_2
0.00000    1.00000    0.05000    1.00000            1.00000              0.00000    0.05000    0.05513


0.05000    1.00000    0.05000    1.05256            1.05263              0.00007    0.05539    0.06138
0.10000    1.05256    0.05000    1.11095            1.11111              0.00016    0.06171    0.06876
0.15000    1.11095    0.05000    1.17618            1.17647              0.00029    0.06917    0.07755
0.20000    1.17618    0.05000    1.24954            1.25000              0.00046    0.07810    0.08813
0.25000    1.24954    0.05000    1.33264            1.33333              0.00070    0.08880    0.10102
0.30000    1.33264    0.05000    1.42755            1.42857              0.00102    0.10190    0.11696
0.35000    1.42755    0.05000    1.53697            1.53846              0.00149    0.11812    0.13697
0.40000    1.53697    0.05000    1.66451            1.66667              0.00215    0.13853    0.16255
0.45000    1.66451    0.05000    1.81505            1.81818              0.00313    0.16472    0.19598
0.50000    1.81505    0.05000    1.99540            2.00000              0.00460    0.19908    0.24079

Runge-Kutta Method of Order 4

We are now ready to state the algorithm for the Runge-Kutta method of order 4. This is a generalization of the R-K method of order 2 to a higher order method. Without getting into the analytical details, we state the R-K method of order 4; it is a widely used algorithm. For the IVP (14.1.1), set h = (x_f - x_0)/n and x_k = x_0 + kh for k = 0, 1, ..., n. Also, we set y_0 = y(x_0). Then for k = 0, 1, ..., n - 1, we define

y_{k+1} = y_k + (h/6) (K_1 + 2 K_2 + 2 K_3 + K_4),


where

K_1 = f(x_k, y_k),
K_2 = f(x_k + h/2, y_k + (h/2) K_1),
K_3 = f(x_k + h/2, y_k + (h/2) K_2),
K_4 = f(x_k + h, y_k + h K_3).
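In code, one step of the order-4 method costs four evaluations of f (again a sketch of ours, using the classical coefficients stated above):

```python
def rk4(f, x0, y0, xf, n):
    """Classical R-K method of order 4: four stages K1..K4 per step,
    combined as y_{k+1} = y_k + (h/6)(K1 + 2*K2 + 2*K3 + K4)."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2.0, y + (h / 2.0) * k1)
        k3 = f(x + h / 2.0, y + (h / 2.0) * k2)
        k4 = f(x + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x = x + h
    return y
```

On the earlier test problem y' = y^2, y(0) = 1 with h = 0.1, the computed y(0.5) is far closer to the exact value 2 than the order-2 result with the same step size.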

Remark 14.3.2 The local error in the R-K method of order 4 is of order O(h^5). To achieve this accuracy, we are forced to do more computation, or in other words, to spend more time computing K_1, K_2, K_3 and K_4. How much time the computation consumes depends on the nature of the function f. The cost we pay for higher accuracy is more computation. Also, to reduce the local error, we need smaller values of the step size h, which again results in a large number of computations. Each computation leads to more rounding errors; in other words, a reduction in the discretisation error may lead to an increase in the round-off error. THE MORAL IS THAT THE INDISCRIMINATE REDUCTION OF STEP-SIZE NEED NOT MEAN MORE ACCURACY.

Figure: Flow-Chart of Runge-Kutta method of order

EXERCISE 14.3.3 Use the Runge-Kutta method of order 4 to find an approximate solution of the IVP

with step size . Also, calculate the error and tabulate the results.


Predictor-Corrector Methods

In the previous sections, during the course of the discussion on the Euler's algorithm, the value of y'(x) was approximated by the forward difference (y(x + h) - y(x))/h. On the other hand, we could also have considered as its approximate value the backward difference (y(x) - y(x - h))/h. We could then have thought of solving the IVP (numerically) by defining the approximations

y_{k+1} = y_k + h f(x_{k+1}, y_{k+1}),  k = 0, 1, ..., n - 1,        (14.4.15)

with the initial value y_0. An equation of the type (14.4.15) is called an implicit equation for y_{k+1}, since y_{k+1} appears on both sides. On many occasions, solving equation (14.4.15) could be too tough, and so we resort to a (numerical) approximate value for y_{k+1}. Note that we are concentrating on computing y_{k+1} for a fixed integer k lying between 0 and n - 1. To start with, we let

y_{k+1}^(0) = y_k + h f(x_k, y_k),

and for j = 0, 1, 2, ..., we define

y_{k+1}^(j+1) = y_k + h f(x_{k+1}, y_{k+1}^(j)).        (14.4.16)

Essentially, we are trying to iterate for y_{k+1}. We need to stop this iteration at some stage, then "find the value" of y_{k+1}^(j) and designate it as y_{k+1}. One method of stopping the iteration is when the relative change


|y_{k+1}^(j+1) - y_{k+1}^(j)| / |y_{k+1}^(j+1)|

is "small" (here "small" means that the absolute value of this ratio is less than a previously assigned small number). We then repeat the process with x_{k+1} in place of x_k and y_{k+1} in place of y_k. In general, (14.4.16) allows us to recursively define

y_{k+1}^(j),  j = 0, 1, ..., J,  k = 0, 1, ..., n - 1.

The iterated values y_{k+1}^(1), y_{k+1}^(2), ... are called THE INNER ITERATIONS for the computation of y_{k+1}.

Some more terminology: normally, an explicit method like the Euler's method or the R-K methods is known as an open type method or algorithm; it is readily available for computation once the starters are known. On the other hand, an implicit method as described by (14.4.15) is called a closed type method. Many a time, the starter for the (approximate) solution of a closed type method is obtained from an open type one. The starter y_{k+1}^(0) for (14.4.16) is also familiarly known as a PREDICTOR, whereas the value y_{k+1} (so computed) is called a CORRECTOR. In short, we predict the value y_{k+1}^(0) and correct it (by iteration) to obtain y_{k+1}. For this reason, such methods are called PREDICTOR-CORRECTOR METHODS (in short, PC methods). Again, we repeat that PC methods need some condition to stop the inner iterations; usually the conditions are:

1. a bound on the number of iterations (called the tolerance on the number of iterations),
2. a bound on the relative error (called the tolerance on the relative error).

As far as condition 1 is concerned, it simply says that we do not wish to iterate beyond a fixed number J of iterations, while condition 2 says to keep iterating till the relative error is small, no matter how many iterations are needed; the bound on the relative error is known as the tolerance. On occasions, both of the conditions 1 and 2 are used to stop the inner iterations, which can even lead to early termination.

With these preliminaries, we state the Predictor-Corrector algorithm.


Algorithm for Predictor-Corrector Method

For the IVP (14.1.1), start from x_0, y_0 and choose a step size h. Let x_{k+1} = x_k + h, with k = 0, 1, ..., n - 1. Let J be the tolerance on the number of iterations for each step, and let ε be the tolerance on the relative error at each step. For k = 0, 1, ..., n - 1:

1. compute the predictor

   y_{k+1}^(0) = y_k + h f(x_k, y_k);

2. for j = 0, 1, ..., J - 1, compute the corrector iterates

   y_{k+1}^(j+1) = y_k + h f(x_{k+1}, y_{k+1}^(j));

3. stop the iterations whenever the absolute value of the relative error

   |y_{k+1}^(j+1) - y_{k+1}^(j)| / |y_{k+1}^(j+1)|

   falls below the previously prescribed positive real number ε.

Remark 14.4.1 The tolerance on the number of inner iterations has been incorporated in step 2 by restricting j to at most J - 1.

Figure: Flow-Chart of Predictor-Corrector Method
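The whole algorithm can be sketched as follows; the implicit scheme is (14.4.15), the predictor is an explicit Euler step, and both stopping criteria are applied. The names and the test problems below are ours:

```python
def predictor_corrector(f, x0, y0, xf, n, J=20, eps=1e-8):
    """Backward-Euler predictor-corrector: predict y_{k+1} by an explicit
    Euler step, then iterate the corrector (14.4.16) until the relative
    error drops below eps or J inner iterations have been used."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        x_next = x + h
        y_new = y + h * f(x, y)                   # predictor
        for _ in range(J):                        # inner iterations
            y_old = y_new
            y_new = y + h * f(x_next, y_old)      # corrector step
            if abs(y_new - y_old) <= eps * abs(y_new):
                break                             # relative-error tolerance met
        x, y = x_next, y_new
    return y
```

For y' = y, backward Euler gives y_{k+1} = y_k/(1 - h) exactly, and the inner iteration converges to that value; for the convex problem y' = y^2, y(0) = 1 with h = 0.05, the implicit scheme overshoots the exact y(0.5) = 2, whereas the explicit Euler value 1.88441 of Table 14.2 undershoots it.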
