


Numerical Methods for Evolutionary Systems – Lecture 4
C. W. Gear
Celaya, Mexico, January 2007
Copyright 2006, C. W. Gear

Slide L4-1: Special methods (methods for problems with special properties)

As with any computational problem, if one recognizes the particular nature of the problem one can often construct a better method for it. In this lecture we will talk about problems that could possibly be solved by conventional ODE methods, but that would take so much computer time that one has to look for better methods. In a sense, stiff solvers fit into this category, but in practice, most stiff problems cannot be solved by non-stiff methods because the number of integration steps would be so large that numerical round-off errors would begin to accumulate to a significant size before the computation was finished. (DAEs, on the other hand, are a set of problems that cannot be solved by conventional ODE methods.) As with virtually all numerical computation, recognizing either the special properties of the problem or the special properties of the solutions (or sometimes both) can decrease computation. There are four main characteristics that we will exploit in looking for faster methods in the slides that follow. Sometimes a task will have more than one of these.

1. The approximate location of the eigenvalues of a stiff problem is known (for example, in a parabolic problem like the heat equation, we know that the eigenvalues range along the negative real axis and we know the location of the leftmost (most negative) fairly accurately from the spatial discretization).

2. The solution is known to have a particular property – for example, it is periodic.

3. The solution is composed of two (or more) components of greatly different time scales (the so-called multi-scale problem).

4. While the problem is high-dimensional, it is known that the solution lies very close to a much lower-dimensional manifold.

Some of the material covered in this lecture is the subject of active ongoing research around the world. The increasing size and speed of computers makes it feasible to contemplate much larger problems, and, historically, improvements in computation have come about in equal parts from raw hardware speed, computer architecture (e.g., parallelism) and new numerical methods.

Slide L4-2

Explicit Methods for Stiff Problems

Earlier we said that methods for stiff problems had to be implicit – although, more precisely, we said that if a problem could be arbitrarily stiff (i.e., have arbitrarily large eigenvalues) we had to use an implicit method. In the case of the heat equation, for example, we know the size of the most negative eigenvalue because it is determined by the size of the spatial discretization, and we know that all of the eigenvalues are real and negative. Suppose, then, that we look for methods that are stable only for the section of the negative real axis where these eigenvalues lie.

The first approach to this was based on what are known as Runge-Kutta Chebyshev (RKC) methods. In Lecture 2 we discussed the general explicit RK method with q stages, which is

    k_0 = h f(y_n);   k_1 = h f(y_n + β_{10} k_0);   ...;   k_q = h f(y_n + Σ_{i=0}^{q-1} β_{qi} k_i);
    y_{n+1} = y_n + Σ_{i=0}^{q} γ_i k_i

When we talked about these methods before, the coefficients were chosen to get the maximum order possible. Instead, in the RKC-like approach, we choose the coefficients to get the most desirable stability region while achieving a relatively low order. Since, when we are basically solving PDEs like the heat equation, we are typically not interested in more than a couple of digits of accuracy, we are usually happy with order 1 or 2 for these methods. If we look at the amplification factor of the explicit RK method applied to the test equation y' = λy we find that it is a q-th degree polynomial in μ = hλ, so that we have:

    y_{n+1} = [1 + a_1 μ + a_2 μ² + ... + a_q μ^q] y_n

where the a_j are polynomials in the RK coefficients. If the method has first order, a_1 = 1; if the method has second order, a_2 = 1/2, etc. Now we want to choose the other a_j so that the region of stability is the most suitable for our application. (Note that a_1 = 1 and a_2 = 1/2 are necessary and sufficient conditions for 2nd order, but this is not true for 3rd and higher orders.)

Slide L4-3

Let us start by considering a first-order method, for which the stability polynomial is

    y_{n+1} = [1 + μ + a_2 μ² + ... + a_q μ^q] y_n

If we are dealing with the heat equation, for which all μ are negative real, we would like this polynomial to be no larger than 1 (in magnitude) for all values of μ between 0 and –K, where K is as large as possible. The reason for the name Chebyshev in this method is because we can use the Chebyshev polynomial to achieve this goal. The Chebyshev polynomial T_q(x) of degree q on the interval [-1,1] is the polynomial such that |T_q(x)| ≤ 1 for –1 ≤ x ≤ +1 that reaches its maxima (at +1) and minima (at -1) precisely q+1 times in [-1,1]. Two of those extrema will be at the end points.

The Chebyshev polynomial of degree q on [-1,1] is given by T_q(x) = cos[q cos⁻¹(x)]. While Chebyshev polynomials are usually used for minimax approximation, they have other important uses in "covering" an area with a polynomial that does not exceed 1 in magnitude. In this case we want to cover the area μ in [-K,0], so we replace x with 1 + 2μ/K. Now we need the term in μ in the power series expansion to have the coefficient 1 so that the method is first order. If you like a little simple trigonometry, you can then calculate what K will be, and you will find that it is K = 2q². Let us examine what this means for the heat equation on the spatial interval [0,1]. We discussed this in Lecture 2 (slide 27) and saw that if we used a finite difference method with N intervals of length 1/N over [0,1] and had the initial-boundary value problem below, the eigenvalues ranged approximately from –π² to -4N².

[Diagram: the initial-boundary value problem on 0 ≤ x ≤ 1, with u(x,0) given and the boundary values u(0,t) and u(1,t) given.]
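As a quick check of this construction (a sketch of my own, not code from the lecture), the first-order stability polynomial is the shifted Chebyshev polynomial T_q(1 + 2μ/K) with K = 2q²; the following Matlab fragment evaluates it on the whole interval [-2q², 0] and confirms that it stays within ±1 and has the slope 1 at μ = 0 needed for first order:

    % Sketch (not from the lecture): the first-order RKC stability polynomial
    % P(mu) = T_q(1 + 2*mu/K) with K = 2*q^2, checked on mu in [-K, 0].
    q  = 4;
    K  = 2*q^2;                   % = 32 for q = 4, matching the figure on slide L4-5
    mu = linspace(-K, 0, 2001);
    x  = 1 + 2*mu/K;              % maps [-K, 0] onto [-1, 1]
    P  = cos(q*acos(x));          % Chebyshev polynomial T_q evaluated on [-1, 1]
    max(abs(P))                   % = 1, so |P(mu)| <= 1 on the whole interval
    % First-order consistency: dP/dmu at mu = 0 is T_q'(1)*(2/K) = q^2*(2/(2*q^2)) = 1.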

Slide L4-4

This time, let us suppose we have a "driving term" in the heat equation, so that it is

    ∂u/∂t = ∂²u/∂x² + g(x,t)

This could happen if, for example, this is the heat flow in an unstirred mixture undergoing a chemical reaction producing heat – then the g(x,t) on the right-hand side would be the heat generated locally. The boundary conditions represent the supposedly prescribed temperatures at either end of the reactor. The rate of change of the solution, u(x,t), is going to determine the spatial discretization and the time step we would like to use. Let us suppose, for the purpose of discussion of the pros and cons of the RKC method, that the time step we need to use is h = 10⁻¹

and that we need N space intervals. Hence the most negative value of μ = hλ we encounter is -4hN² = -0.4N². Thus we need K ≥ 0.4N², so we must choose q² ≥ 0.2N², or q ≈ 0.45N. It may seem that using the large q that this method calls for will be expensive. However, note that the number of RK stages increases only with N (although the computational cost will increase with N², since the amount of computation for each stage is proportional to the number of equations, N). However, the alternative, which is to use an implicit method, involves matrix arithmetic, and the computational costs of this can increase with N³ in the worst case. For this simple example, iterative methods could be used for the matrix arithmetic and good preconditioners are known, so RKC would probably not be more efficient. However, if the problem is such that we do not know fast ways to handle the matrix issues, the RKC method can be efficient – and it has the advantage that the code is straightforward. There are two problems with the RKC method as discussed this far: it is only first order, and it is not damping at q-1 points along the negative real axis, because the stability polynomial actually reaches 1 in magnitude at the ends of the stability interval and at q-1 points in the interior. The problems due to a lack of damping were illustrated in Lecture 2 (slide 19). Errors remain and may oscillate.

Slide L4-5

(The diagrams above are taken from E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Equations, 2nd ed. 1996, Springer-Verlag for the purposes of this lecture, and should not be further distributed without permission from the publisher!)

Below we see the stability region for the 1st order RKC method with q = 4 (here called s).

Note that it extends to -32 (as our analysis showed) and the boundary touches the real axis at q-1 = 3 points between 0 and -32. To achieve damping, the polynomial is usually modified slightly so that it is damping in the interior. It is then no longer optimal in terms of the distance along the real axis, but not much is lost, as can be seen in the modified method below.

Slide L4-6

In the above discussion of the RKC method, we only considered first order methods. Usually we need to go to at least 2nd order methods in time. There is no longer any theory that tells us what the optimal methods are, but some polynomials have been developed that are believed to be close to optimal, and have a stability range of about 0.81q². (This is about 40% of the range of a first order method, the price of the increased accuracy.) While the first order methods were actually known more than 100 years ago and the extended ranges of stability have been considered by many authors, it does not seem that a systematic study of them took place until the 90's.

For further information see the papers below and references therein:

B. P. Sommeijer, L. F. Shampine, and J. G. Verwer, RKC: An Explicit Solver for Parabolic PDEs, J. Comput. Appl. Math. 88 (2) (1998) pp 315-326

J. G. Verwer, Explicit Runge-Kutta Methods for Parabolic Partial Differential Equations, Appl. Num. Math. 22 (1996) pp 359-379

Slide L4-7

RKC methods create a region of stability that extends along the negative real axis. Other authors have looked at crafting methods that shape the stability regions to the areas of importance, and this is still a developing field. We will discuss one other approach here – not because it is the best, but because it is the one we know most about, having been involved in its development! In the end the "best" methods will be those for which reasonable general purpose software can be developed and distributed, and, apart from RKC, that does not appear to have happened yet.

PROJECTIVE METHODS

This is the name given to a class of methods that start with the following idea. Suppose we had a technique for integrating a differential system from t to t+Δt. (This technique could be one of the integration methods we have already discussed, it could be a "legacy code" – a code that was written long ago that we don't dare try to change – or it could be a simulator of a microscopic model of the system. We will call it the inner integrator.) Let us suppose that we start from y_0 and integrate for k+1 steps to get to y_{k+1}. Now let us use a simple approximation through some of these points to extrapolate to a future point. For example, we could put a straight line through the last two points, y_k and y_{k+1}, to estimate a future point. Let us suppose our computed points have been at t_j = jh, j = 0, 1, ..., k+1, and let us take a projective step to t_{k+1+M} = (k+1+M)h = sh (where M does not have to be an integer). For such a simple calculation, we have

    y_{k+1+M} = (M+1) y_{k+1} - M y_k

and we call this the Projective Forward Euler method (because it is the analog of the 1st order Forward Euler). This method uses k+1 standard integration steps, but lets us advance a distance equal to s = k+1+M steps. A diagram of this method is shown on the next slide with k = 2.
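As an illustration of this formula (my own sketch, not code from the lecture), here is Projective Forward Euler with Forward Euler as the inner integrator, applied to a linear problem with one fast and one slow eigenvalue; the k+1 inner steps damp the fast component and the projective step tracks the slow one:

    % Sketch: Projective Forward Euler, inner integrator = Forward Euler.
    lamf = -1000; lams = -1;    % fast and slow eigenvalues of a diagonal test system
    h = 1e-3; k = 2; M = 9;     % inner step size, k+1 inner steps, projection distance M
    y = [1; 1]; t = 0;          % y(1) is the fast component, y(2) the slow one
    for outer = 1:30            % each outer step advances (k+1+M)*h
        for i = 1:k+1           % inner Forward Euler steps (damp the fast component)
            yk = y;             % remember y_k, the next-to-last inner value
            y  = y + h*[lamf*y(1); lams*y(2)];
        end
        y = (M+1)*y - M*yk;     % projective step: y_{k+1+M} = (M+1)y_{k+1} - M y_k
        t = t + (k+1+M)*h;
    end
    % y(2) now approximates exp(lams*t) at t = 30*12*h = 0.36.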

Slide L4-8

Projective Forward Euler method with k = 2 – linear fit to the last two points

[Diagram: one outer integrator step, Φ_1 – three inner steps followed by a projective step.]

The stability region of this method in the hλ-plane for the standard test equation y' = λy depends on k and M, but it also depends on the inner integrator, which we haven't specified. Hence, we assume that the inner integrator has an amplification factor of ρ and plot the stability region in the ρ-plane. These regions are shown on the next slide for a couple of cases. These are taken from: C. W. Gear and I. G. Kevrekidis, Projective Methods for Stiff Differential Equations, SIAM J. Sci. Comp. 24 (4) (2003) pp 1091-1106.

An intuitive interpretation of what is happening is that the first k steps of the inner integrator are damping out fast components, and then the slow components are approximated in the projective step using extrapolation.

Slide L4-9

[Figure: stability regions in the ρ-plane of the Projective Forward Euler method for the two cases labeled below.]

Note that when M is relatively small, there is a single region, but as M becomes large relative to k (roughly when M > 3.6k) the region splits into two parts. This means that we can use a large M, but only if the eigenvalues are clustered into two regions – one near the origin (where ρ is near 1) and one for which ρ is near 0.

Although these methods were originally intended for use with legacy codes or microscopic simulators (something we will discuss later in this lecture) they can be considered as integration methods in their own right, provided we specify an inner integrator. Suppose we use Forward Euler as the inner integrator. For it we have ρ = 1+ hλ. In that case the ρ-plane is simply the hλ-plane shifted to the right by 1 unit and the stability diagrams to the left can be interpreted directly as if they were in the hλ-plane.
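For the test equation, the overall amplification that follows from the projective step formula is σ = ρ^k[(M+1)ρ - M], so the boundaries shown in these figures can be reproduced by plotting the contour |σ| = 1 (a sketch of mine, not the code that produced the figures):

    % Sketch: boundary of the Projective Forward Euler stability region in the rho-plane.
    k = 2;  M = 9;
    [X, Y] = meshgrid(-0.6:0.005:1.1, -0.6:0.005:0.6);
    rho   = X + 1i*Y;
    sigma = rho.^k .* ((M+1)*rho - M);      % amplification of one outer step
    contour(X, Y, abs(sigma), [1 1], 'b')   % |sigma| = 1 is the stability boundary
    xlabel('Re \rho'); ylabel('Im \rho')
    % With Forward Euler inside, rho = 1 + h*lambda, so this picture is the
    % h*lambda-plane shifted one unit to the right.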

One could use a higher order extrapolant for the projective step and get similar results. In the paper cited above, putting a higher degree polynomial through more points was discussed, as shown on the next slide.

k = 2, M = 5.

k = 2, M = 9.

Slide L4-10

[Figure: stability region in the ρ-plane of the 4th order projective method described in the caption below.]

This shows the stability region for a method that starts with 4 inner steps, then takes 4 more and fits a 4th degree polynomial through the last 5 points to get a fourth order outer method. As with the projective Forward Euler, we get two regions of stability – one near ρ = 1 that looks very similar to the stability region of the explicit 4th order RK method shown on slide L2-10, and one near the origin in the ρ-plane. Note that if we are using Forward Euler as the inner integrator, the origin in the ρ-plane corresponds to hλ = -1 in the hλ-plane. Thus it might not appear that we have extended the stability region very far, but if we view this method as one that actually takes a step of length sh using a number of function evaluations smaller than s, then in the shλ-plane the second stability region is at –s.

Methods like this are useful when we have a clear separation of eigenvalues into two clusters.

4th order method using 8 inner steps and a 4th order projective step from the last 5 points.

However, such problems are relatively rare (they may happen in the multiscale modeling that we will discuss later). It is much more likely that we will either have several clusters of eigenvalues (likely to happen in a large reaction kinetic system where there may be several groups of time scales), or a spread of eigenvalues (as in parabolic equations). For both of these cases we can apply an idea called telescopic projective integration, which layers projective integration on top of projective integration. This is discussed on the next slides.

Slide L4-11

Telescopic Projective Integration

Recall that in projective integration we used an inner integrator to advance some steps and damp the fast components, and then used a projective step to jump ahead (as far as possible while maintaining stability and accuracy). The combination is another integration method, and the idea of telescopic projective integration is to use a projective integrator as the inner integrator for another level of projection, and so on as many times as needed. This is illustrated below for two levels of projective integration. In this illustration, the first level of projective integration has a step size h_1 = (k+1+M)h_0 = (3+M)h_0. The second level has a step size h_2 = (3+M)h_1 = (3+M)²h_0. We can see that after p levels of telescoping we would have a step size of h_p = (3+M)^p h_0 – that is, the step size increases exponentially.

[Diagram: two-level projection method (k = 2 at both levels), with outer points y_0, y_{3+M}, y_{6+2M}, y_{9+3M}.]

We can use teleprojection to either put down several different regions of stability, or to get an extended region of stability similar to the RKC method. We do not have to use the same k and M at each teleprojective level, although we will in our examples below. If M is relatively large compared to k then there will be a split into multiple regions as we saw earlier and will see on the next slide.
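A minimal recursive sketch of the idea (my own illustration, assuming Forward Euler as the innermost integrator and the same k and M at every level; the function name tele_pfe is mine):

    % Sketch: one outer step of telescopic Projective Forward Euler.
    % level = 0 is a single Forward Euler step of size h0; level p uses the
    % level p-1 method as its inner integrator, so the step grows like (k+1+M)^p*h0.
    function y = tele_pfe(f, y, h0, k, M, level)
        if level == 0
            y = y + h0*f(y);                % innermost integrator: Forward Euler
            return
        end
        for i = 1:k+1                       % k+1 inner steps at the level below
            yk = y;                         % remember the next-to-last inner value
            y  = tele_pfe(f, yk, h0, k, M, level-1);
        end
        y = (M+1)*y - M*yk;                 % projective step of this level
    end

    % Example: y = tele_pfe(@(y) -200*y, 1, 1e-3, 2, 2, 3) advances y' = -200y
    % by (3+2)^3 = 125 innermost steps while evaluating f only 3^3 = 27 times.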

Slide L4-12

The stability diagram below is for two levels of projective integration with k = 2 and M = 9 at each level. The diagram is plotted in the h_0λ-plane, but after the two levels of teleprojection the outer step is actually h_2 = (3+9)²h_0 = 144h_0. Thus, in the h_2λ-plane the left-most stability region is actually around -144. The middle region is centered around -1/12 in the diagram, which is -12 in the h_2λ-plane, while the rightmost – which seems very small in this picture – is close to a disk centered at -1 with radius 1 in the h_2λ-plane; that is, it is roughly the stability region of the outer Forward-Euler-like method. What we see is that if we pick a series of step sizes at the different levels of projection, here h_0, h_1, and h_2, we will get regions that are stable for eigenvalues of sizes around -1/h_0, -1/h_1, and -1/h_2.

[Figure: stability regions of the "Euler 2-9" method, iteration 2 (two levels with k = 2 and M = 9), plotted in the h_0λ-plane.]

Slide L4-13

If we make M small (compared to k) then there will only be one extended region of stability. The diagram below shows the stability regions of 10 levels of projective integration using k = 2 and M = 2. This is in the ρ-plane, so subtract 1 from the real axis to get the h_0λ-plane. However, note that the outer step size is h_10 = (3+2)^10 h_0 = 9,765,625 h_0, so the region is very large in the h_10 λ-plane! It will take 3^10 = 59,049 function evaluations for each outer step.

[Figure: stability regions in the ρ-plane of 10 levels of projective integration with k = 2 and M = 2.]

This work is described in: C. W. Gear and I. G. Kevrekidis, Telescopic Projective Methods for Stiff Differential Equations, J. Comput. Phys. 187 (1) (2003) pp 95-109.

Slide L4-14

As we make M larger, the boundary of the stability region squeezes down until it touches the negative real axis. With k = 2 this happens when M = 3. The stability region for this case is shown below for 10 levels of telescoping. (Note that the boundary actually touches the negative real axis at each point where it "pinches off", although the accuracy of plotting doesn't show it.) Now the outer step size is h_10 = (3+3)^10 h_0 = 60,466,176 h_0. It takes the same 59,049 function evaluations as the method on the previous slide.

Slide L4-15

Method for getting complex pairs of stability regions

We sometimes need to damp eigenvalues out in the complex plane. We could make the region around the negative real axis very "thick," or we could try to specifically damp eigenvalues in the complex plane. If we use k+2 inner steps and then do a projective step from the last three points using the formula

    y_{k+2+M} = rM² y_{k+2} + (M - 2rM²) y_{k+1} - (M - 1 - rM²) y_k

we get a stability polynomial of the form

    σ = ρ^k [1 + M(ρ-1) + rM² (ρ-1)²]

This gives the pair of roots

    ρ - 1 = (-1 ± √(1 - 4r)) / (2rM)

We can adjust r and M to place the zeros of this at any desired pair of complex conjugates.

A use of this method combined with a simple projective Forward Euler using 5 levels is shown on the next slide.
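A one-line check of where the zeros land for given r and M (my own sketch; for example, the values r = 0.3, M = 3 used on the next slide give a pair near 0.44 ± 0.25i):

    % Sketch: zeros of 1 + M*(rho-1) + r*M^2*(rho-1)^2 for chosen r and M.
    r = 0.3;  M = 3;
    rho = 1 + (-1 + sqrt(1 - 4*r)*[1 -1])/(2*r*M)   % approximately 0.44 +/- 0.25i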

Slide L4-16

The plot above is the stability region of a 4-level projective method in the ρ = 1+h_0λ plane. The sizes of the steps at each level are h_0, 4h_0, 20h_0, 80h_0, and 880h_0. The outermost is Projective Forward Euler (k=1, M=9), so the right-most region – a dot in the figure at (0,1) – is near the stability region for the Forward Euler with step size 880h_0. The first and third levels used the method discussed on the previous slide with r=0.3 and r=0.35 respectively (with k=1, M=3) to place stability regions around 0.44±0.25i and 0.976±0.015i respectively. The second level method was PFE with k=1, M=3, placing a region near 0.95. Larger values of k would make the regions (except for the rightmost) larger.

Slide L4-17

The method we have just described can be cast as a Runge-Kutta method, so its region of stability cannot exceed those of the optimum RKC methods. They are, however, only a modest factor smaller than the optimum, and they are more adaptive.

The method we have just described is only of first order, and this is often inadequate – although for large problems second order is often sufficient. Instead of using a first order method as the outermost integrator we can use a second order method based either on an Adams Bashforth 2-step method, or a 2-stage Runge-Kutta method. Since all of the inner steps are only first order, this would apparently lead only to a first-order method, but there is a "trick" we can use that is applicable in many situations, and that is to select the coefficients of the outermost projective step to "mop up" the second-order errors created in all of the inner steps, thus getting a second-order method. This technique, in the context of teleprojective integrators, is discussed in S. Lee and C. W. Gear, Second-order accurate projective integrators for multiscale problems, to appear, JCAM.

The details are algebraically messy and not worth discussing here. The basic idea is that one can find expressions for the error term as it propagates along an integration method of a particular order, say q. Periodically one could take a different type of step with enough parameters to allow one to also annihilate the h^{q+1} terms in the error. Instead of just zeroing them in the current step, one could zero the amount propagated from the previous steps as well, thus getting an answer of one order higher accuracy at that step.

Slide L4-18

We have just discussed ways in which we can solve stiff equations with explicit methods to save the computation involved in the matrix arithmetic of implicit methods. DAEs also normally use implicit methods. When we discussed DAEs in Lecture 3, one of the approaches we discussed was regularization, which converted them into stiff ODEs (although we noted that this might not provide any great advantage because the large step size of stiff ODE solvers might make them appear numerically no different than DAEs). However, if we can place the eigenvalues at known points we have the possibility of using projective methods that annihilate eigenvalues only in certain specific regions. This looks like a possible approach for DAEs (although it is still a topic of current study). In Lecture 3 we discussed various forms of regularization for DAEs, including the following for an index-3 Euler-Lagrange system:

    x' = v - β G^T (GG^T)^{-1} g(x)
    v' = f - β G^T (GG^T)^{-1} (g(x))'

If x and v are n-dimensional vectors and the constraint g(x) is an m-dimensional vector, this transformation adds 2m additional degrees of freedom to the system, and the 2m additional eigenvalues are all approximately –β. Hence, if we use an explicit projective method where the inner integrator is Forward Euler with step size 1/β, we will damp the fast eigencomponents very quickly (because the amplification of Forward Euler is 1 + hλ, which in this case will be zero, so that for a linear problem the damping would happen in one step). In the multiple pendula example pictured on the next slide, we illustrate the use of this method with 7 pendula, each coupled to its two neighbors in 3 dimensions. The original DAE has 21-dimensional x and v (3 coordinates for each of the 7 pendula) and 14 constraints (the 7 supporting rods and the 7 connecting rods). The derivative function subroutine takes two small FE steps to damp the 14 fast eigencomponents. This gives updated values of x and v. It then evaluates the derivative. The integration is handled by the non-stiff Matlab Adams code ode113, with a slight modification so that it accepts both a derivative and a modified x and v from the function subroutine.
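A sketch of what such a derivative function might look like (this is my own illustration, not the lecture's pendula code; gfun, Gfun and ffun are hypothetical handles for the constraints g(x), their Jacobian G, and the applied accelerations f):

    % Sketch: derivative function that first damps the fast, regularization-induced
    % components with two Forward Euler steps of size 1/beta, then returns the
    % derivative (and the updated x, v) for the outer non-stiff integrator.
    function [xdot, vdot, x, v] = damped_deriv(x, v, beta, gfun, Gfun, ffun)
        h = 1/beta;                              % inner FE step: 1 + h*(-beta) = 0
        for i = 1:3                              % two damping steps, then one evaluation
            G = Gfun(x);
            S = G*G';
            xdot = v - beta*(G'*(S\gfun(x)));                % regularized x'
            vdot = ffun(x, v) - beta*(G'*(S\(G*xdot)));      % regularized v', using (g(x))' = G*x'
            if i < 3
                x = x + h*xdot;                  % Forward Euler damping step
                v = v + h*vdot;
            end
        end
    end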

Slide L4-19

Example Problem: multiple pendulum

M pendula suspended from points equi-spaced around a horizontal circle.

Each is linked to its two nearest neighbors by a rigid rod such that at rest the pendula balls are equi-spaced on a smaller horizontal circle.

Slide L4-20

The above pendula example was integrated using three different approaches – the projective integration method just described, and stiff integrations of two regularized systems. Some results are shown below. Methods S1 and S2 are both stiff integrators on a regularized system. The regularization used in S1 was similar to that of Baumgarte but with a slight modification. (The one due to Baumgarte was not as effective in this case.) The regularization used in S2 was the same as used in the projective integrator case and its equations are given above.

The results show the computer time (which is not too meaningful in absolute terms, since this is using Matlab and does not represent the fastest code that could be written). It also shows the energy loss over the integration interval as a measure of the accuracy. (We do not have a "true" solution to compare against the computed results for this problem. Generally it is better to construct test problems with known solutions, but then it is not always possible to get realistic problems.)

For details see: C. W. Gear, Towards Explicit Methods for DAEs, BIT Num. Math. 46 (2006) pp 505-514.

Slide L4-21

Utilizing Knowledge of the Solution

Whenever we are approximating a function numerically on a computer, the more we know about the function, the more accurate we can make the approximation for the same amount of memory or computational cost. For example, if we know that a function is periodic, we are likely to approximate it with a Fourier series rather than a power series because we will normally get more accuracy for the same number of terms in the series. The same is true when we are solving differential equations – if we know something about the solution we will generally get more accurate answers (or answers of the same accuracy at lower cost) by making use of that knowledge in our choice of representation.

When we solve differential equations numerically we are making an assumption about how best to approximate the solution. In the methods we have discussed so far, that assumption is that the solution can be accurately represented (over a single integration step) by a few terms of a power series. Thus, a 4th order method is exact for solutions that are 4th-degree polynomials. What we are doing is saying that we can get a good approximation to the solution, y(t), with a few terms of

    y(t) = Σ_{n=0}^∞ a_n t^n

This is just a special case of a general expansion in a set of basis functions, φ_n(t), which are chosen so that they are linearly independent:

    y(t) = Σ_{n=0}^∞ c_n φ_n(t)

We want to choose the basis functions, φ_n(t), so that y(t) is well approximated with only a few terms. Thus, if we know that the solution is close to sinusoidal with a known frequency, we might use a Fourier series based on that frequency. If we knew that the solution contained exponentially varying components, we might use an expansion in exponential functions.

Slide L4-22

One can construct integration formulas that are exact for the first few terms of an expansion in any given set of basis functions very simply. Before, when we discussed methods, we used some short cuts to develop them. For example, we showed how to get Adams methods by integrating an approximating polynomial through the derivatives. There is a different view of what we are doing which leads to the same result, but which can be used to handle any set of basis functions. Let us consider an Adams method, which is

    y_{n+1} = y_n + h Σ_{i=0}^{q} β_i y'_{n+1-i}

If we want to make this exact for all polynomials of degree k we can simply substitute y(t) = t^j for j = 0, 1, 2, ... in this equation to get linear equations in the β_i. Note that the equation is automatically satisfied for j = 0. We get one equation for each j > 0 so, since there are q+1 unknown coefficients, we can satisfy q+1 equations and get a (q+1)st order method. Similarly, if we wished to make it exact for the basis functions sin(t), cos(t), sin(2t), cos(2t), ... we would simply substitute these functions and get q+1 linear equations in the same way. Methods based on trigonometric functions have been known for 50 years, and have found use, for example, in the simulation of a helicopter rotor system where there is a known frequency of rotation and many of the waveforms in the system are tied to that frequency and its harmonics. Using sinusoidal functions is just a special case of using general exponential functions, since exp(it) includes both sine and cosine components. If one knew some of the slow eigenvalues of the system and thought that these were the dominant components, then making a method exact for those would be valuable. We use polynomials when we have no special information because they give a good approximation over a range of smooth functions. If we are making methods exact for some particular functions, it is often worthwhile also making it exact for a few of the lower degree polynomials.
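A small sketch of this construction (my own illustration; the basis functions chosen here are just an example): build one linear equation per basis function and solve for the β_i.

    % Sketch: coefficients beta_i of  y_{n+1} = y_n + h*sum_{i=0}^{q} beta_i*y'(t_{n+1-i})
    % chosen so the formula is exact for a given set of q+1 basis functions.
    q = 3;  h = 0.1;
    t1   = 0;                          % place t_{n+1} at 0 for convenience
    tpts = t1 - (0:q)*h;               % t_{n+1}, t_n, ..., t_{n+1-q}
    phi  = {@(t) t,             @(t) sin(t), @(t) cos(t),  @(t) sin(2*t)};   % basis
    dphi = {@(t) ones(size(t)), @(t) cos(t), @(t) -sin(t), @(t) 2*cos(2*t)}; % derivatives
    A = zeros(q+1);  b = zeros(q+1, 1);
    for j = 1:q+1
        A(j, :) = h*dphi{j}(tpts);             % h * phi_j'(t_{n+1-i})
        b(j)    = phi{j}(t1) - phi{j}(t1 - h); % phi_j(t_{n+1}) - phi_j(t_n)
    end
    beta = A\b
    % With the polynomial basis {t, t^2, t^3, t^4} this reproduces the familiar
    % Adams-Moulton coefficients [9 19 -5 1]/24.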

Slide L4-23

Another class of methods is called exponential methods. These are designed to be exact for problems like y' = Ay and variations on it. They make direct use of the matrix, A, in some form of an approximation to the matrix exponential, exp(Ah). For example, if the ODE is

    y' = f(y,t) = Ay + g(t)     (1)

then we know that the solution is

    y(t) = e^{At} y(0) + ∫_0^t e^{A(t-s)} g(s) ds

If we also knew that g(t) was a low degree polynomial we could create a method that would be exact for this A and polynomial g(t) up to some order. Of course, in most cases we do not have the simple form (1), but if our method for (1) has a form like

    y_{n+1} = α_1(hA) y_n + h β_1(hA) g_n + h β_2(hA) g_{n-1} + ... + h β_{q+1}(hA) g_{n-q}

where the integration coefficients α and β depend on hA, we can also handle problems like

    y' = f(y,t) = Ay + g(y,t)     (2)

and we can then operate on the general problem y' = f(y,t) = Ay + [f(y,t) – Ay], where A is an approximation to the Jacobian of f. There are hundreds of approaches in the literature, so it is out of the question to try to survey them here. If you have problems that might benefit from such methods, the best bet is to do a Web search for exponential integrators. However, be aware that all of these methods require the computation of an approximation to the matrix exponential, and if the matrix has large eigenvalues this is a challenging problem that may involve more work than a direct integration method.
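The simplest member of this family, for the form (1) with g frozen at g(t_n) over each step, is often called exponential Euler; a small sketch (my own, assuming A is nonsingular so the integral can be written with an explicit inverse):

    % Sketch: exponential Euler for y' = A*y + g(t), holding g fixed over each step:
    %   y_{n+1} = e^{hA}*y_n + A^{-1}(e^{hA} - I)*g(t_n)
    % (exact if g really were constant on the step; production codes use phi-functions
    % rather than an explicit inverse of A).
    A = [-100 1; 0 -2];           % hypothetical stiff test matrix
    g = @(t) [1; sin(t)];         % hypothetical forcing term
    h = 0.1;  y = [1; 1];  t = 0;
    E = expm(h*A);                % the matrix exponential -- the expensive ingredient
    for n = 1:50
        y = E*y + A\((E - eye(2))*g(t));
        t = t + h;
    end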

Slide L4-24

Multiscale Problems

As computers have grown larger and faster there has been increasing interest in the issue of multiscale problems. Prior to the availability of supercomputers with very large memories and high speed, one used mathematics and intuition to simplify a problem until it was of a reasonable size for computation. That was usually done by ignoring many small effects in the problem to get a simplified model. However, there are a lot of problems where, for one reason or another, we have to pay attention to small features even though we are only interested in the solution at the large scale. Among many problems of this type we would include large-scale fluid flow (where turbulence, which is a much smaller scale phenomenon, can have a major impact on the overall flow), material behavior under high stress conditions (where failures may initiate in small-scale cracks), reactions at very low species concentrations (where probability effects may be significant), gas flow at very low densities (where one may need to model the gas as a set of molecules), and flow through a random medium (as in oil recovery) – just to mention a few examples. Some people will also classify problems in which the spatial or time scales of the activity change a lot from place to place (or time to time) as "multiscale." Under this definition, any problem in which one needed a finer grid in one place than another could be classified as "multiscale." In this lecture I will restrict "multiscale" to refer to problems in which the nature of the system needed to model the phenomena of interest varies with scale. Within this category of problems we can distinguish two cases:

1. In some regions we have to use a model which looks at more detail (e.g., crack propagation, or an atomistic rather than continuum model, etc.)

2. To get the correct behavior we somehow have to take account of a fine-scale, detailed model everywhere, even though the behavior is on a large scale.

Slide L4-25

In the first case, a big issue is how to couple two different models at any interface region. For example, if we have a particle model of a gas in one region and a continuum fluid model in another, how do we "match boundary conditions" at the interface? This is a difficult problem, and its solution depends very much on the type of problem, so it is not really possible to give general guidelines.

In the second case, we have a detailed model of the system, for example, a particle model of a gas, but we really want to model the system over spatial regions much, much larger than the typical inter-particle spacing, and over times much, much longer than the time between particle collisions. The latter type of problem has received a lot of attention in the last few years. The objective is to do, computationally, what one can do mathematically for a few of the problems. For example, gas flow was modeled by the continuum Navier-Stokes equations a long time ago (and they can be derived more or less from first principles and the conservation equations). Statistical mechanics tells us how to derive these equations from a particle model in the limit as the number of particles gets large. In the same way, reaction kinetic equations are the limit, as the number of atoms in a system gets very large, of a probabilistic model of chemical reactions. We call these limiting continuum models closures of the more detailed model. Many mathematical models we use in practice are the closure of finer scale models. If we can find the closure of a detailed model, it is usually more efficient to work with that model on the large scale, and it is only when it breaks down (due to small concentrations, etc.) that we need to use a more detailed model. However, as we consider an increasing number of effects in the detailed model, we quickly reach a point where we cannot find the mathematical closure of the system, even though we believe it exists. In that case, we may need to compute with the fine-scale model, although we know that the behavior changes only on a coarse scale.

In another case of multiple scales, we have a very fine scale behavior (for example, a high-frequency oscillator that is changing slowly, or a fine scale random medium like sand where the local flow varies a lot but whose properties can be averaged over a large scale) and we wish to consider only the large scale changes.

Slide L4-26

The first type of problem is what has been called Equation Free Computing by Kevrekidis et al: I. G. Kevrekidis, C. W. Gear, J. M. Hyman, P. G. Kevrekidis, O. Runborg, C. Theodoropoulos, Equation-free, coarse-grained multiscale computation: Enabling microscopic simulators to perform system-level tasks, Comm. Math. Sci. 1 (4) (2003) pp 715-762. The idea is that one simulates a microscopic model for a period of time that is long on the microscopic scale, but short on the coarse, macroscopic scale, in order to evaluate the chord slope of the trajectory of the coarse variables. For example, if the microscopic model was a particle model of gas flow, the macroscopic variables would be a few moments of that flow such as pressure, density, velocity, etc.

A coarse integration step at the macroscopic level involves starting with the macroscopic state, lifting it to a microscopic representation (e.g., representing the velocity and density by an arrangement of particles), simulating at the microscopic level for a while, and then restricting back to the coarse level at a couple of points to enable the chord slope to be calculated. Then projective integration, as discussed earlier in this lecture, can be used. The process takes the form:

[Diagram: Restriction and lifting with projective integration: the macroscopic description is lifted to a microscopic description, short microscopic simulations are run, the results are restricted back to the macroscopic level, and a projective step advances the solution.]

Slide L4-27

In addition to long time-scale integration, the idea behind this process can be used to perform other numerical analysis tasks on a microscopic system. For example, we can find steady states, or perform bifurcation analysis on a system, at the coarse level. Instead of using the chord slope as the input to an integrator, we solve the equation

    Φ(y) = 0     (1)

where Φ(y) is the chord slope at y, using any standard zero finder. A typical zero finder code (for example, fsolve in Matlab) simply requires a function of an argument and a starting point. The code that evaluates Φ(y) provides the function. (We may, however, need to provide a good initial guess.) In this way we can analyze the unknown macroscopic equations in various ways without ever obtaining their mathematical form. We illustrate this with a trivial example (one for which we could easily do the mathematical analysis). If we were to start with a stochastic model, it would be the following: We have a collection of identical particles in one-dimensional space. The particles are subject to three "forces." They are being attracted back to the origin with a velocity equal to λ times the distance from the origin (i.e., exponential decay), they are subject to "white noise," and they have an additive "drift" velocity, m. We know that the system will settle down with a distribution of particles with some mean and standard deviation. (In fact, the final distribution will be a Gaussian in this case, but we won't use that fact.) Because of the added numerical complications of solving non-linear equations with stochastic noise that are not relevant to our discussion, we will handle the microscopic problem as a partial differential equation, and our macroscopic variables will just be the current mean and standard deviation. White noise is equivalent to diffusion, so we basically have a parabolic partial differential equation of the form

    ∂U/∂t = μ ∂²U/∂x² + ∂[(λx - m)U]/∂x

Slide L4-28

The EX40 Matlab program given on the next slide finds the steady state of this system by finding what values of the mean and standard deviation lead to a steady state. It contains a function subroutine that integrates over a time interval T, starting from a given mean and standard deviation, to compute the changes to these variables. The first part of the main program calls the function 100 times to advance 100T in time and plots the answer as a function of time. The second part of the main program just calls the Matlab function fsolve to find the zero of the function subroutine.

The function subroutine lifts from the given mean, mn, and standard deviation, sd, to a representation of a function U(x) that is defined on 4N+1 discrete points. The points are chosen to be at locations mn-2*sd through mn+2*sd. By making the function zero everywhere except at the points at locations mn-sd and mn+sd where it is 0.5, we have obtained the correct mean and standard deviation.

Note that we are not initializing to the final distribution shape that, as we said earlier, is a Gaussian. If we did not know (or guess correctly) that the shape does not matter we would have to include some variables in our macroscopic description to approximate the shape.

We could, of course, check our assumption by starting at the steady state and running the problem for a large T to see if it made any difference.

When we execute this program we get the graph shown on the next slide. It can be seen that the solution is converging, but it will take some time. By using fsolve we get the steady state directly as sd = 0.7246, mn = 0.1000.

Slide L4-29

[Figure: output of the EX40 program: the standard deviation (red) and mean (blue) plotted against time over the 100 function calls.]

% Simple example of microscopic/macroscopic simulation
global N lambda T m mu
% 4N+1 is Number points, decay rate, interval,
% drift rate, diffusion rate
N = 10; lambda = 1; m = 0.1; mu = 0.5;
T = 1E-2; K = 100;
sd0 = 1; mn0 = 0;
sd = sd0; mn = mn0;
SD = [sd0]; MN = [mn0];
for i = 1:K
    outV = fun40([sd; mn]);
    sd = sd + outV(1); mn = mn + outV(2);
    SD = [SD; sd]; MN = [MN; mn];
end
init = [sd0; mn0];
plot((0:K)', SD, '-r', (0:K)', MN, '-b')
print -dpsc Ex40

options = optimset('TolX',1E-8,'TolFun',1E-5);
[v, val, exitflag, output] = fsolve(@fun40, init, options)

function outV = fun40(inV)
% inV is vector of sd and mean
global N lambda T m mu
% Lift step
n = 4*N+1;                     % Total number of cells
dx = inV(1)/N;
X = (-2*N:2*N)*dx + inV(2);
U = zeros(size(X));
U(N+1) = 0.5; U(3*N+1) = 0.5;
t = 0;
while t < T
    Diffuse = mu/dx^2;
    Drift = (-X*lambda + m)/dx;
    Decr = abs(Drift) + 2*Diffuse;
    h = min(T-t, 0.9/max(Decr));
    LeftFlux = Diffuse*U + max(0,-Drift.*U);
    RightFlux = Diffuse*U + max(0,Drift.*U);
    U = U + ([0 RightFlux(1:n-1)] + [LeftFlux(2:n) 0] - Decr.*U)*h;
    U(1) = U(1) + LeftFlux(1)*h;
    U(n) = U(n) + RightFlux(n)*h;
    t = t+h;
end
% Restriction step
new_sd = sqrt(U*((X-U*X').^2)');
new_mn = U*X';
sd_change = new_sd - inV(1);
mn_change = new_mn - inV(2);
outV = [sd_change; mn_change];

Slide L4-30

Had we applied the above process to the original stochastic model we would have been able to take a "snapshot" of the system at any time and compute the macroscopic variables (standard deviation and mean in that example). In such problems, although their microscopic properties are changing rapidly (for example, the positions of individual particles would have been changing rapidly in that example), the macroscopic quantities that can be evaluated from a snapshot are only changing slowly. There are some classes of problems in which there is no macroscopic property that is readily discernible from a snapshot solution. Consider, for example, the simple oscillator

    y' = ωz
    z' = -ωy

whose solution is y = A sin(ωt), z = A cos(ωt). If we take a snapshot we see only a position which changes from snapshot to snapshot. (Of course, if we already knew the nature of the problem, we would know that the right macroscopic variable is y² + z², which is constant.)

In a problem like this we will have to look at the behavior over a time interval to understand what is happening. (See, for example, W. E and B. Engquist, The heterogeneous multiscale method, Comm. Math. Sci. 1 (1) (2003) pp 87-132.) In these cases we will have to average appropriate combinations of variables to find the macroscopic variables. For example, in the oscillator case above we could estimate the standard deviation of y or z over time, which would be an appropriate macroscopic variable.

Slide L4-31

Problems that are nearly periodic provide a special opportunity for short cuts. The idea for these methods goes back to the early days of artificial satellites. Ideally, of course, an artificial satellite is in a periodic orbit, so all that would be necessary would be to integrate for one orbital period (and even that would not be necessary, because under the conditions in which it is exactly periodic we can write down the orbital equations exactly since the orbit is an ellipse). However, in practice, there is some drag from the small amount of gas present, some variations due to gravitational variations in the earth, some effect from solar radiation, etc., and the actual orbit will only be close to periodic. The graph below shows a nearly periodic function – actually sin(2πt)*exp(-0.02πt) – over 20 approximate periods. If we select a point at the same phase in each period and draw a smooth curve through them we get something that is a reasonable representation of the changing periodic function. We call it a quasi envelope.

[Figure: the nearly periodic function sin(2πt)*exp(-0.02πt) over 20 approximate periods.]

Slide L4-32

Given a point on the quasi envelope for a problem and the differential equation for the model, we can integrate over one period and find the next point one period later. (Note that if the equation is autonomous – does not explicitly depend on t – then it doesn't matter what point on the quasi envelope we start from. If it is not autonomous, but the driving term in t is responsible for the oscillations, then we know exactly how to get on the same phase. If there is a driving term, but it is slow compared to the oscillations, then we can also start at any point on the quasi envelope.)

The idea now is very similar to the projective integration discussed earlier – we compute the chord over one (or more) periods, and then approximate the quasi envelope over a very long time step. (In the sorts of problems now of interest here, such as oscillatory circuits on VLSI chips, we may be tracking a very slowly changing oscillation over millions of periods, so it is important to be able to skip over large numbers of periods.) Note that it does not matter if we only have an approximation to the period. We can still define the quasi envelope. The graph below does not have quite the right period for the adjacent points, so there is slightly more change in the quasi envelope, but it still serves to define the function.

[Figure: the quasi envelope of the same function when the sampling points do not have quite the right period.]

Slide L4-33

The graph below illustrates the use of projective integration on the nearly periodic problem. The continuous oscillatory (magenta) curve is the solution over all cycles. The thicker (green) segments are the integration over one approximate period, while the smooth (blue) curve is the result of linear projection from each computed period over 4 non-computed periods. Note that the amplitude of the projective solution decreases somewhat more slowly. Use of a higher-order projective method would decrease this effect. Note also that we don't have exactly the period so that the quasi envelope is changing phase slowly. The code for this example is given on the next slide.

[Figure: projective integration of the nearly periodic problem – the full oscillatory solution (magenta), the computed periods (green), and the projected quasi envelope (blue).]

Slide L4-34

% Sample Oscillatory problem solution
figure (1); hold off
w = 1E3;               % 2*pi/period
lam = 1E0;             % damping
A = [[-lam w];[-w -lam]];   % System matrix
N = 50;                % Number of steps in one period
M = 4;                 % Number of steps to project
h = 2*pi/w/N;          % Step size
I = eye(2);            % useful matrix - the identity
y0 = [1; 0];           % Initial conditions
Env = [y0]; T = [0];
t = 0;
for j = 1:10           % Run 10 projective steps
    y = y0;
    Y = [y0];
    for i = 1:N        % Integrate over one cycle
        y = (I-h*A/2)\(I + h*A/2)*y;   % One trapezoidal step
        Y = [Y y];
    end
    plot(t+(0:N)*h, Y(2,:), '-g', 'LineWidth', 3)
    hold on
    y0 = M*y - (M-1)*y0;   % Projective step
    Env = [Env y0];
    t = t + (M+1)*N*h;
    T = [T t];
end
plot(T, Env(2,:), '-ob', 'LineWidth', 2)

% Now integrate normally
y = [1; 0];            % Initial conditions
Y = [y];
for i = 1:N*(M+1)*10   % Integrate over all cycles
    y = (I-h*A/2)\(I + h*A/2)*y;   % One trapezoidal step
    Y = [Y y];
end
plot((0:N*(M+1)*10)*h, Y(2,:), '-m', 'LineWidth', 1)
print -dpsc Ex41a

Page 35: Numerical Methods for Evolutionary Systems – Lecture 4 › ~wgear › Celaya › Lecture4.pdf · Numerical Methods for Evolutionary Systems – Lecture 4 C. W. Gear Celaya, Mexico,

Copyright 2006, C. W. Gear L4-35

Low dimensional manifolds

Many of the problems we have looked at have low-dimensional, slow manifolds. The very nature of stiff problems is that wherever we start the integration, it quickly falls onto a slow manifold – so the problem of stiff equations was that of finding methods that would integrate in the slow manifold using large steps. DAEs were already restricted to manifolds of the total space – so in one sense the solution fell onto the manifold infinitely rapidly – and that was what we tried to exploit with regularization methods that converted them back into stiff equations. In the multi-scale problems we were interested in computing on the slow manifold of the macroscopic variables. And singular perturbation problems deal directly with slow manifolds. In fact, many problems of interest have slow manifolds, and if we can find ways to restrict our attention to these manifolds we can hope to compute much more rapidly. For example, if we can reduce the dimensionality of the problem to the dimension of the slow manifold, the amount of information we must deal with may be reduced. Because the manifold is slow, we might be able to take much larger time steps, or otherwise speed up the calculation.

In the process described on slide L4-27, the lifting process from the macroscopic variables to the microscopic variables may not find a point on the slow manifold, but the first few microscopic integration steps are assumed to bring one back down to the slow manifold (and it is also assumed that being off the slow manifold a little has an insignificant effect on the solution). There are cases where we need to find a point on the slow manifold corresponding to specific values of some variables.

We will discuss a method presented in a simple form in C. W. Gear and I. G. Kevrekidis, Constraint-defined manifolds: a legacy-code approach to low-dimensional computation, Journal of Scientific Computing, Vol. 25, Nos. 1/2, pp. 17-28 (2005), and in more generality in C. W. Gear, T. J. Kaper, I. G. Kevrekidis, and A. Zagaris, Projecting to a Slow Manifold: Singularly Perturbed Systems and Legacy Codes, SIAM J. Applied Dynamical Systems, Vol. 4, pp. 711-732 (2005), and apply it to the simple problem used to illustrate equation-free computing on slide L4-28.


Copyright 2006, C. W. Gear L4-36

Let us start with a singular perturbation problem as discussed in Lecture 3.

    y' = f(y, z)
    ε z' = g(y, z)                                        (1)

If we are given the initial value for y, we would like to find a value for z that is on (or very close to) the slow manifold. We know that, under some modest conditions (such as g_z being non-singular and a stable matrix – without which the problem does not have a reasonable solution), the solution of this problem has an asymptotic expansion in ε as ε goes to zero, and that the zero-order term of this expansion is the solution of the DAE obtained by setting ε to zero. Hence, choosing z so that g(y,z) = 0 will give us a point (y,z) that is close to the low-dimensional manifold of this problem. That is, we just ask for the initial derivative of z to be zero.
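The snippet below is a minimal sketch, not from the slides, of this zero-order projection: given a value y*, solve g(y*, z) = 0 for z. The function handle g is an assumed toy example, and fsolve (from the Optimization Toolbox) merely stands in for any nonlinear solver.

% Zero-order approximation to the slow manifold: pick z so that g(y*,z) = 0.
g      = @(y,z) [z(1) - y(1)^2; z(2) + y(1)*y(2)];   % assumed example of g(y,z)
ystar  = [1; 0.5];                                   % given slow variables y*
zguess = zeros(2,1);                                 % initial guess for z
zstar  = fsolve(@(z) g(ystar,z), zguess);            % solve g(y*,z) = 0
% (ystar, zstar) is then within O(eps) of the slow manifold.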

In a more general case we do not have a system in the form (1) above. However, we will assume that the problem we have has similar properties, but that there has been a change of variables (perhaps nonlinear) to a new set of variables u = u(y,z), v = v(y,z), and that we have no information about this change of variables. In fact, we are simply given a problem in some set of variables w; we know (or think we know!) the dimension of the slow manifold, and we can identify a subset of the components of w, say u, that is sufficient to parameterize the slow manifold, as in the diagram below where u1 and u2 are sufficient to specify a point on the shaded manifold. The remaining variables will be v.

[Diagram: a shaded two-dimensional slow manifold parameterized by u1 and u2; the remaining variables v are transverse to it.]


Copyright 2006, C. W. Gear L4-37


The fact that we are assuming that there is the underlying structure of a singular perturbation problem means that in a local coordinate system (that we don't know) the solution family looks like the graph on the left below. Since the behavior is much faster in z than in y, the trajectories off the slow manifold come into the slow manifold with very rapid changes in z and small changes in y, so they are almost vertical in the y-z "plane" (y and z are multi-dimensional, so this is not really a plane!). The criterion for finding a point close to the slow manifold is that it is a point where z has stopped changing – as shown in the inset box below, where we find the z* corresponding to a given y*.

[Figure: left, trajectories in the (y, z) frame dropping almost vertically onto the slow manifold, with an inset showing the z* found for a given y*; right, the same picture in the observed (u, v) frame.]

When we have a different set of variables, as shown on the right above, it might at first seem that the approach will not work, since we can't find z to make its derivative zero. However, the slightly surprising thing is that we can still get an order ε approximation to the slow manifold by setting v' to zero for a given u. Intuitively, the reason it still gives a good approximation is that trajectories coming into the slow manifold are essentially in the direction of the (unknown) z axis (or axes) until they get within order ε of the slow manifold. Hence, only if the z axis were almost horizontal in the u-v frame would we have trouble, but in that case u would be a very poor parameterization of the slow manifold.


Copyright 2006, C. W. Gear L4-38

If we have a legacy code or a microscopic simulator, we may not be able to compute the derivatives to make them zero. However, we can integrate a short distance. In that case, our objective would be to choose v* to make the chord slope zero for a given value of u = u*. One could, of course, try to use a root finder for this process, but in a typical situation the dimension of v is large, so it is a difficult task for a general root finder. There is a simple iterative process that can be used in this situation. All we have to do is start at (u*, v) for some reasonable initial guess for v, and then in each iteration integrate one step forward in time (which updates both the u and v variables) and reset the u variables back to u*. Under most conditions this will converge from a nearby starting point, as can be seen in the illustration below.

Note that we do not have to know anything about ε for this process to work. However, if ε is not very small, a first-order approximation may be inadequate and we may want to look for a higher-order one. The second paper cited a few slides back shows how we can get an order p approximation in ε by making the (p+1)st derivative of v zero instead of the first derivative.

[Figure: successive iterates v0, v1, v2, v3 converging toward the slow manifold; each iteration integrates forward a short time Δt and then resets u to u*.]
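Below is a minimal, self-contained sketch (not from the slides) of this p = 0 iteration on an assumed toy system u' = -u + v, ε v' = u^2 - v, whose slow manifold is v ≈ u^2. The forward-Euler inner_step is only a stand-in for a legacy integrator or microscopic simulator.

% p = 0 constrained-runs iteration: integrate a short step, then reset u to u*.
eps_ = 1e-3;                               % small parameter of the toy problem
rhs  = @(w) [-w(1) + w(2); (w(1)^2 - w(2))/eps_];
inner_step = @(w,dt) w + dt*rhs(w);        % one short inner (forward Euler) step
ustar = 0.5;                               % value of u at which we want a manifold point
w  = [ustar; 0];                           % initial guess: v = 0
dt = 1e-4;                                 % short step; must resolve the fast scale
for iter = 1:200
    w(1) = ustar;                          % reset u back to u*
    vold = w(2);
    w = inner_step(w, dt);                 % integrate one step forward
    if abs(w(2) - vold) < 1e-8             % chord slope of v is (nearly) zero
        break
    end
end
vstar = w(2);                              % should be close to ustar^2 = 0.25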


Copyright 2006, C. W. Gear L4-39

Making a high derivative zero may seem like a tough challenge. Instead of doing that, we do the analog of what we did in the p = 0 case and make the (p+1)st difference equal to zero. All we have to do to achieve this, in an analog of the method on the previous slide, is to integrate forward p+1 fixed steps (whether by a legacy integrator or a microscopic simulator, it doesn't matter) and then project backwards, using a pth-degree polynomial through the last p+1 points, to get new values for the v variables.
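As a rough illustration (again not from the slides), the sketch below applies this order-p variant to the same assumed toy system as in the previous sketch: take p+1 fixed inner steps, fit a degree-p polynomial through the p+1 computed values of v (as a function of the step index), and extrapolate it back to the starting point, which makes the (p+1)st difference of the v sequence zero.

% Order-p constrained runs: p+1 forward steps, then backward polynomial extrapolation.
eps_ = 1e-3;
rhs  = @(w) [-w(1) + w(2); (w(1)^2 - w(2))/eps_];
inner_step = @(w,dt) w + dt*rhs(w);        % stand-in for a legacy/microscopic integrator
ustar = 0.5;  dt = 5e-4;  p = 2;           % dt and p are assumed choices
v = 0;                                     % initial guess for v
for iter = 1:500                           % outer iteration
    w = [ustar; v];                        % clamp u to u*, keep the current v
    V = zeros(1, p+1);
    for k = 1:p+1                          % p+1 fixed inner steps
        w = inner_step(w, dt);
        V(k) = w(2);
    end
    c    = polyfit(1:p+1, V, p);           % degree-p polynomial through the p+1 points
    vnew = polyval(c, 0);                  % extrapolate back to the starting index 0
    if abs(vnew - v) < 1e-10
        v = vnew;  break
    end
    v = vnew;
end
vstar = v;                                 % higher-order approximation, close to ustar^2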

We will illustrate the p = 0 case using the earlier example from slide L4-28:

    ∂U/∂t = μ ∂²U/∂x² + ∂((λx − m)U)/∂x

We want to find a distribution corresponding to an assigned mean and standard deviation. (We didn't bother with this before because it makes no difference to the problem we addressed then – finding the steady state of the two macroscopic variables. One could think of extensions to the problem where the initial distribution might be important.) The Matlab program Ex42.m on the next slide does this. It starts with an initial value for U which has the right mean and standard deviation, but consists of two peaks at mn0 − sd0 and mn0 + sd0. It integrates U for a short interval, DT, to get a new Uout and then projects back by resetting the mean and standard deviation of Uout to the initial values. This is done by simply rescaling Uout – which in this code is done by calculating what x points would be needed for Uout to have the right mean and standard deviation, and then interpolating in this data for the values at the original x points. Note that the interpolation will have some error, so the total "mass" in U might not be conserved. This is corrected (rather crudely) by simply rescaling so that the total mass is 1 again. The program pauses between iterations and shows the successive shapes of the distribution U. A plot showing a series of iterations is also given on the next slide. Note that the assumption here is that the fast "v" variables (which are not explicitly computed) are in some sense "orthogonal" to the slow "u" variables: we are assuming that, whatever the "v" variables are, they are not changed when we renormalize to reset the mean and standard deviation "u" variables to their initial values.


Copyright 2006, C. W. Gear L4-40

% Ex42.m Finding the slow manifold
% We will start at sd0 and mn0 standard deviation and mean and find
% a distribution that does not change (after being rescaled to the
% same mean and standard deviation) over an interval DT

global X dx lambda DT m mu n
N = 10;                        % Number of x points is 6N+1
lambda = 1; m = 0.1; mu = 0.5;
sd0 = .9; mn0 = 0;
DT = 1E-1;
% Form initial distribution
n = 6*N+1;                     % Total number of cells
dx = sd0/N;
X = (-3*N:3*N)*dx + mn0;
U = zeros(size(X));
U(2*N+1) = 0.5; U(4*N+1) = 0.5;

figure(1); hold off
plot(X,U);
hold on
pause
while 1

    Uout = fun42(U);
    % Mean and standard deviation of the integrated distribution Uout
    new_mn = Uout*X';
    new_sd = sqrt(Uout*((X-new_mn).^2)');
    Xout = X - new_mn;             % Set to mean 0
    Xout = Xout*sd0/new_sd;        % Set to std sd0
    Xout = Xout + mn0;             % Now set to mean mn0
    Xout = [-1E10 Xout 1E10];      % exterior points so that interp1 doesn't
    Uout = [0 Uout 0];             %   interpolate beyond the range
    Unew = interp1(Xout,Uout,X);   % Interpolate in output at old X points
    Unew = Unew/sum(Unew);         % Renormalize to correct interpolation errors
    plot(X,U,'-r',X,Unew,'-b')
    U = Unew;
    pause

end

function OutU = fun42(U)
% Integrate the distribution U over one interval of length DT
global X dx lambda DT m mu n
t = 0;
while t < DT
    Diffuse = mu/dx^2;
    Drift = (-X*lambda + m)/dx;
    Decr = abs(Drift) + 2*Diffuse;
    h = min(DT-t, 0.9/max(Decr));
    LeftFlux = Diffuse*U + max(0,-Drift.*U);
    RightFlux = Diffuse*U + max(0,Drift.*U);
    U = U + ([0 RightFlux(1:n-1)] + ...
        [LeftFlux(2:n) 0] - Decr.*U)*h;
    U(1) = U(1) + LeftFlux(1)*h;
    U(n) = U(n) + RightFlux(n)*h;
    t = t+h;
end
OutU = U;

[Figure: successive distribution shapes produced by Ex42.m, plotted against x from −3 to 3; the values range from 0 to about 0.5.]