
An Introduction to Matched Asymptotic Expansions (A Draft)
Peicheng Zhu
Basque Center for Applied Mathematics and Ikerbasque Foundation for Science
Nov., 2009

Preface
The Basque Center for Applied Mathematics (BCAM) is a newly founded institute and one of the Basque Excellence Research Centers (BERC). BCAM hosts a series of training courses on advanced aspects of applied and computational mathematics that are delivered, in English, mainly to graduate students and postdoctoral researchers, both from within and outside BCAM. The series starts in October 2009 and finishes in June 2010. Each of the 13 courses is taught in one week and has a duration of 10 hours.
These are the lecture notes of the course on Asymptotic Analysis that I gave at BCAM from Nov. 9 to Nov. 13, 2009, the second in this series of BCAM courses; some preliminary materials have been added. In this course we introduce some basic ideas of the theory of asymptotic analysis. Asymptotic analysis is an important branch of applied mathematics with a broad range of contents, so, due to the time limitation, I concentrate mainly on the method of matched asymptotic expansions. First, some simple examples, ranging from algebraic equations to partial differential equations, are discussed to give the reader a picture of the method of asymptotic expansions. We then end with an application of this method to an optimal control problem, concerning the vanishing viscosity method and the alternating descent method for optimal control of scalar conservation laws in the presence of noninteracting shocks.
Thanks go to the active listeners for their questions and discussions, from which I have learnt many things. The empirical law that the best way of learning is through teaching has always worked for me.
Peicheng Zhu
Bilbao, Spain
Jan., 2010


Contents
Preface i
1 Introduction 1
2 Algebraic equations 7
2.1 Regular perturbation . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Iterative method . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Singular perturbation . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Rescaling . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Nonintegral powers . . . . . . . . . . . . . . . . . . . . . . . 15
3 Ordinary differential equations 17
3.1 First order ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.1 Regular . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.2 Singular . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.2.1 Outer expansions . . . . . . . . . . . . . . . . . 21
3.1.2.2 Inner expansions . . . . . . . . . . . . . . . . . 22
3.1.2.3 Matched asymptotic expansions . . . . . . . . . 23
3.2 Second order ODEs and boundary layers . . . . . . . . . . . . . 27
3.2.1 Outer expansions . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 Inner expansions . . . . . . . . . . . . . . . . . . . . . . 30
3.2.2.1 Rescaling . . . . . . . . . . . . . . . . . . . . . 30
3.2.2.2 Partly determined inner expansions . . . . . . . 32
3.2.3 Matching conditions . . . . . . . . . . . . . . . . . . . . 33
3.2.3.1 Matching by expansions . . . . . . . . . . . . . 33
3.2.3.2 Van Dyke's rule for matching . . . . . . . . . . 34
3.2.4 Examples where the matching condition does not exist . . 37
3.2.5 Matched asymptotic expansions . . . . . . . . . . . . . . 37
3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4 Partial differential equations 41

4.1 Regular problem . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Conservation laws and vanishing viscosity method . . . . . . . . 42
4.2.1 Construction of approximate solutions . . . . . . . . . . 43
4.2.1.1 Outer and inner expansions . . . . . . . . . . . 43
4.2.1.2 Matching conditions and approximations . . . . 44
4.2.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.3 Convergence of the approximate solutions . . . . . . . . . . . . 45
4.3.1 The equations are satisfied asymptotically . . . . . . . . 46
4.3.2 Proof of the convergence . . . . . . . . . . . . . . . . . . 52

5 An application to optimal control theory 57

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2 Sensitivity analysis: the inviscid case . . . . . . . . . . . . . . . 62
5.2.1 Linearization of the inviscid equation . . . . . . . . . . . 62
5.2.2 Sensitivity in presence of shocks . . . . . . . . . . . . . . 65
5.2.3 The method of alternating descent directions: Inviscid case . . 67
5.3 Matched asymptotic expansions and approximate solutions . . . 72
5.3.1 Outer expansions . . . . . . . . . . . . . . . . . . . . . . 73
5.3.2 Derivation of the interface equations . . . . . . . . . . . 78
5.3.3 Inner expansions . . . . . . . . . . . . . . . . . . . . . . 81
5.3.4 Approximate solutions . . . . . . . . . . . . . . . . . . . 84
5.4 Convergence of the approximate solutions . . . . . . . . . . . . 86
5.4.1 The equations are satisfied asymptotically . . . . . . . . 87
5.4.2 Proof of the convergence . . . . . . . . . . . . . . . . . . 93
5.5 The method of alternating descent directions: Viscous case . . . 100
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Bibliography 107
Index 110

Chapter 1
Introduction
In the real world, many problems (arising in applied mathematics, physics, the engineering sciences, and also in pure mathematics, e.g. the theory of numbers) do not have a solution that can be written as a simple, exact, explicit formula. Some of them have a complicated formula, but we do not know much about such a formula.
We now consider some examples. i) The Stirling formula:
n! ∼ √(2πn) n^n e^{−n} (1 + O(1/n)). (1.0.1)
Here the Landau symbol O (big O) and the Du Bois-Reymond symbol ∼ are used. Note that n! grows very quickly as n → ∞ and becomes so large that one cannot form any idea of how big it is; formula (1.0.1), however, gives a good estimate of n!.
ii) From algebra we know that, in general, there is no explicit formula for the roots of an algebraic equation of degree n ≥ 5.
iii) Most problems in the theory of nonlinear ordinary or partial differential equations do not have an exact solution.
And many others.
Why asymptotics? In practice, however, an approximation of a solution to such problems is usually enough, so approaches to finding such approximations are important. There are two main methods. One is numerical approximation, which became especially powerful after the invention of the computer and is now regarded as the third most important method of scientific research (after the traditional two, the theoretical and the experimental methods). The other is analytical approximation with an error which is understandable and controllable; in particular, the error can be made smaller by some rational

procedure. The term "analytical approximate solution" means that an analytic formula for an approximate solution is found, and its difference from the exact solution can be estimated.
What is asymptotic analysis? Asymptotic analysis, an important branch of applied mathematics, is a powerful tool for finding analytical approximate solutions to complicated practical problems. A rigorous foundation was established in 1886 by Poincaré and Stieltjes, who published separate papers on asymptotic series. Later, in 1905, Prandtl published a paper on the motion of a fluid or gas with small viscosity along a body; in the case of an airfoil moving through air, this problem is described by the Navier-Stokes equations with large Reynolds number, and the method of singular perturbation was proposed.
History. Of course, the history of asymptotic analysis can be traced back much earlier than 1886, even to the time when our ancestors studied problems as small as the measurement of a rod or as large as the perturbed orbit of a planet. When we measure a rod, each measurement gives a different value, so n measurements result in n different values; which one should be chosen as the length of the rod? The best approximation to the real length of the rod is the mean value of these n numbers, and each measurement can be regarded as a perturbation of the mean value.
The Sun's gravitational attraction is the main force acting on each planet, but there are much weaker gravitational forces between the planets, which produce perturbations of their elliptical orbits; these make small changes in a planet's orbital elements with time. The planets which perturb the Earth's orbit most are Venus, Jupiter, and Saturn. These planets and the Sun also perturb the Moon's orbit around the Earth-Moon system's center of mass. The use of mathematical series for the orbital elements as functions of time can accurately describe perturbations of the orbits of solar system bodies for limited time intervals. For longer intervals, the series must be recalculated.
Today, astronomers use high-speed computers to compute orbits in multiple-body systems such as the solar system. The computers can be programmed to make allowances for the important perturbations on all the orbits of the member bodies. Such calculations have now been made for the Sun and the major planets over time intervals of up to several tens of millions of years.
As accurately as these calculations can be made, however, the behavior ofcelestial bodies over long periods of time cannot always be determined. Forexample, the perturbation method has so far been unable to determine thestability either of the orbits of individual bodies or of the solar system as awhole for the estimated age of the solar system. Studies of the evolution of

the Earth-Moon system indicate that the Moon's orbit may become unstable, which would make it possible for the Moon to escape into an independent orbit around the Sun. Recently, astronomers have also used the theory of chaos to explain irregular orbits.
The orbits of artificial satellites of the Earth, or of other bodies with atmospheres, whose orbits come close to their surfaces, are very complicated. The orbits of these satellites are influenced by atmospheric drag, which tends to bring the satellite down into the lower atmosphere, where it is either vaporized by atmospheric friction or falls to the planet's surface. In addition, the shape of the Earth and of many other bodies is not perfectly spherical. The bulge that forms at the equator, due to the planet's spinning motion, causes a stronger gravitational attraction; when the satellite passes over the equator, it may be slowed enough to pull it down to Earth.
The above argument tells us that many problems involve small perturbations, some of which can be omitted under suitable assumptions. For instance, when studying the orbit of the Earth around the Sun, or of an artificial satellite, we are certainly allowed to omit the influence of the gravity of celestial bodies which are so distant from the Earth that they cannot even be observed.
Main contents. The main contents of asymptotic analysis cover: the perturbation method, the method of multiscale expansions, the averaging method, the WKBJ (Wentzel, Kramers, Brillouin and Jeffreys) approximation, the method of matched asymptotic expansions, the asymptotic expansion of integrals, and so on. Due to the time limitation, this course is mainly concerned with the method of matched asymptotic expansions. First we study some simple examples arising in algebraic equations and ordinary differential equations; though those examples are simple, from them we will extract the key ideas of matched asymptotic expansions. Then we shall investigate matched asymptotic expansions for partial differential equations, and finally take an optimal control problem as an application of the method of asymptotic expansions.
Let us now introduce some notation. D ⊂ R^d with d ∈ N denotes an open subset of R^d; f, g, h : D → R are real continuous functions. We denote a small quantity by ε.
The Landau symbols O (big O) and o (little o), and the Du Bois-Reymond symbol ∼, will be used.

Definition 1.0.1 (asymptotic sequence) A sequence of gauge functions {φ_n(x)}, n = 0, 1, 2, …, is said to form an asymptotic sequence as x → x_0 if, for all n,

φ_{n+1}(x) = o(φ_n(x)), as x → x_0.

Definition 1.0.2 (asymptotic expansion) If {φ_n(x)} is an asymptotic sequence of gauge functions as x → x_0, we say that

∑_{n=1}^{∞} a_n φ_n(x), where the a_n are constants,

is an asymptotic expansion (or an asymptotic approximation, or an asymptotic representation) of a function f(x) as x → x_0 if, for each N,

f(x) = ∑_{n=1}^{N} a_n φ_n(x) + o(φ_N(x)), as x → x_0.
Note that the last equation means that the remainder is smaller than the last term included, once the difference x − x_0 is sufficiently small.
For various problems the solution can be written as f = f(x; ε), where ε is usually a small parameter. Suppose that we have the expansion

f(x; ε) ∼ ∑_{n=1}^{N} a_n(x) δ_n(ε), as ε → 0.
If the approximation is asymptotic as ε → 0 for each fixed x, then it is called variously a Poincaré, classical or straightforward asymptotic approximation.
If a function possesses an asymptotic approximation (expansion) in terms of an asymptotic sequence, then that approximation is unique for that particular sequence.
Note that the uniqueness is with respect to one given asymptotic sequence. Therefore, a single function can have many asymptotic expansions, each in terms of a different asymptotic sequence. For instance, as ε → 0,
tan(ε) ∼ ε + (1/3)ε^3 + (2/15)ε^5 + ⋯

∼ sin(ε) + (1/2)(sin(ε))^3 + (3/8)(sin(ε))^5 + ⋯

∼ ε cosh(√(2/3) ε) + (31/270)(ε cosh(√(2/3) ε))^5 + ⋯.
We should also note that the uniqueness is with respect to one given function. Thus many different functions may share the same asymptotic expansion, because they can differ by a quantity smaller than the last term included. For example, as ε → 0,

ε ∼ ε, ε + exp(−1/ε^2) ∼ ε,

and as ε → 0^+,

exp(ε) ∼ ∑_{n=0}^{∞} ε^n/n!,

exp(ε) + exp(−1/ε) ∼ ∑_{n=0}^{∞} ε^n/n!.

Therefore the quantities ε and ε + exp(−1/ε^2) are equal up to any order of ε, and we can regard them as the same in the sense of asymptotic expansions. So are exp(ε) and exp(ε) + exp(−1/ε).


Chapter 2
Algebraic equations
In this chapter we shall investigate some algebraic equations; though the examples are quite simple, they are very helpful for establishing a picture of asymptotic analysis in our minds. We consider algebraic equations with a small positive parameter, denoted by ε in what follows.
2.1 Regular perturbation
Consider first the following quadratic equation:

x^2 − εx − 1 = 0. (2.1.1)
Setting ε = 0, equation (2.1.1) becomes

x^2 − 1 = 0. (2.1.2)
From Figure 2.1 we see that the roots of f(x; ε) converge to 1 and −1, respectively, and that the curves rapidly become more and more similar as ε → 0; the blue curve with ε = 0.025 almost coincides with the green one corresponding to ε = 0.01.
In fact, it is easy to find the roots x^ε of (2.1.1) for any fixed ε, which read

x_1^ε = (ε + √(ε^2 + 4))/2, and x_2^ε = (ε − √(ε^2 + 4))/2. (2.1.3)
Correspondingly, the roots x^0 of (2.1.2) are x^0_{1,2} = ±1. A natural question arises: does x^ε converge to x^0? With the help of formula (2.1.3), we prove easily that

x_1^ε → x^0_1 = 1, and x_2^ε → x^0_2 = −1. (2.1.4)

Figure 2.1: Graphs of f(x; ε) = x^2 − εx − 1 as ε goes to zero. The values of ε are 0.1, 0.05, 0.025, 0.01, corresponding to the black, red, blue and green curves, respectively.
So (2.1.1) is called a regular perturbation of (2.1.2); the perturbation term is εx. A perturbation is called singular if it is not regular.
For most practical problems, however, we do not have explicit formulas like (2.1.3). In that case, how can we gain knowledge about the limits as the small parameter goes to zero?
The method of asymptotic expansions is a powerful tool for such an investigation. Noting that x^ε depends on ε, to construct an asymptotic expansion we define an ansatz as follows:

x^ε = ε^{α_0}(x_0 + ε^{α_1}x_1 + ε^{α_2}x_2 + ⋯), (2.1.5)

where the α_i (i = 0, 1, 2, …) are constants to be determined, and we assume, without loss of generality, that

x_0 and x_1 differ from zero, and 0 < α_1 < α_2 < ⋯. (2.1.6)
We first determine α_0. There are three cases to be taken into account:
Case i) α_0 > 0, Case ii) α_0 < 0, and Case iii) α_0 = 0.

Next we show that only case iii) can yield an asymptotic expansion. Insert ansatz (2.1.5) into equation (2.1.1); balancing both sides of the resulting equation, we obtain

ε^{2α_0}x_0^2 + 2ε^{2α_0+α_1}x_0x_1 + ε^{2(α_0+α_1)}x_1^2 − ε^{1+α_0}x_0 − ε^{1+α_0+α_1}x_1 − 1 + ⋯ = 0. (2.1.7)
Suppose now that case i) holds, i.e. α_0 > 0. Then every power of ε on the left of (2.1.7) is positive except that of the constant term, so the coefficient of ε^0, which is −1, would have to vanish; this is impossible.
For case ii), namely α_0 < 0, invoking assumption (2.1.6) we have

2α_0 < min{0, 1 + α_0},

so 2α_0 is the smallest power in (2.1.7) and the coefficient of ε^{2α_0} must vanish, i.e. x_0^2 = 0, which violates our assumption too.

Therefore only case iii), i.e. α_0 = 0, is possible, and ansatz
(2.1.5) becomes

x^ε = x_0 + ε^{α_1}x_1 + ε^{α_2}x_2 + ⋯; (2.1.8)
moreover, (2.1.7) now reads

x_0^2 + 2ε^{α_1}x_0x_1 + ε^{2α_1}x_1^2 − εx_0 − ε^{1+α_1}x_1 − 1 + ⋯ = 0. (2.1.9)
Similarly to the above procedure for determining α_0, we can determine α_1, α_2, etc.; one finds α_1 = 1, α_2 = 2, …. So ansatz (2.1.5) takes the form

x^ε = x_0 + εx_1 + ε^2x_2 + ⋯, (2.1.10)
and the following expansions are obtained:

ε^0 : x_0^2 − 1 = 0, (2.1.11)
ε^1 : 2x_0x_1 − x_0 = 0, (2.1.12)
ε^2 : 2x_0x_2 + x_1^2 − x_1 = 0. (2.1.13)
Solving (2.1.11) we get x_0 = 1 or x_0 = −1. We take the first case as an example and construct the asymptotic expansion; the second case is left to the reader as an exercise. With x_0 = 1 in hand, from (2.1.12) and (2.1.13) we get, respectively,
x_1 = 1/2, x_2 = 1/8.
Keeping the first i terms (i = 1, 2, 3), we expand x^ε as follows:

X_1^ε = 1, (2.1.14)
X_2^ε = 1 + ε/2, (2.1.15)
X_3^ε = 1 + ε/2 + ε^2/8. (2.1.16)

The next question then arises: how precisely does X_i^ε (i = 1, 2, 3) satisfy the equation for x^ε?
Straightforward computations yield that
(X_1^ε)^2 − εX_1^ε − 1 = O(ε), (2.1.17)
(X_2^ε)^2 − εX_2^ε − 1 = O(ε^2), (2.1.18)
(X_3^ε)^2 − εX_3^ε − 1 = O(ε^4). (2.1.19)
From this it is easy to see that X_i^ε satisfies the equation very well when ε is small, and that the error becomes smaller the larger i is, that is, the more terms we take.
This example illustrates how powerful the method of asymptotic expansions is in finding an analytic approximation without a priori knowledge of the solution to the problem under consideration.
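As a quick numerical sanity check (my own sketch, not part of the original notes; the tolerances are illustrative), the three-term expansion (2.1.16) can be compared with the exact positive root (2.1.3):

```python
import math

def exact_root(eps):
    # exact positive root of x^2 - eps*x - 1 = 0, cf. (2.1.3)
    return (eps + math.sqrt(eps**2 + 4)) / 2

for eps in (0.1, 0.01, 0.001):
    X3 = 1 + eps / 2 + eps**2 / 8      # three-term expansion (2.1.16)
    residual = X3**2 - eps * X3 - 1    # O(eps^4) by (2.1.19); in fact eps^4/64
    assert abs(residual) < eps**3
    assert abs(exact_root(eps) - X3) < eps**3
```

The residual is exactly ε^4/64 here, in agreement with the O(ε^4) estimate (2.1.19).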
2.2 Iterative method
In this section we make use of the so-called iterative method to construct an asymptotic expansion, again for equation (2.1.1). We rewrite (2.1.1) as

x = √(1 + εx),

where x = x^ε. This formula suggests an iterative procedure,
x_{n+1} = √(1 + εx_n), (2.2.1)
for any n ∈ N. Here we only take the positive root as an example. Let x_0 be a fixed real number. One then obtains from (2.2.1) that
x_1 = 1 + (ε/2)x_0 + ⋯, (2.2.2)
so we have found the first term of an asymptotic expansion; the second term in (2.2.2), however, still depends on x_0. To get the second term of the asymptotic expansion, we iterate once again and arrive at
x_2 = 1 + (ε/2)x_1 + ⋯ = 1 + ε/2 + ⋯, (2.2.3)
which gives the desired result. Thus, after iterating twice, we have constructed an asymptotic expansion:
x^ε = 1 + ε/2 + ⋯. (2.2.4)

Calculating the derivative of the function

x ↦ f(x) := √(1 + εx),

we have

f′(x) = ε / (2√(1 + εx)),

from which it follows easily that the iteration defined by (2.2.1) converges, since ε is a small number.
A shortcoming of this method for constructing an asymptotic expansion is that, in general, we do not have an explicit formula like (2.2.1) which guarantees that the iteration converges.
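The iteration (2.2.1) is easy to test numerically; the following sketch (my own illustration, with arbitrary iteration count and tolerances) confirms both the convergence and the two-term expansion (2.2.4):

```python
import math

eps = 0.05

x = 0.0                          # any starting value x_0
for _ in range(20):
    x = math.sqrt(1 + eps * x)   # fixed-point iteration (2.2.1)

# the iteration converges to the exact positive root of x^2 - eps*x - 1 = 0
exact = (eps + math.sqrt(eps**2 + 4)) / 2
assert abs(x - exact) < 1e-12

# the limit agrees with the two-term expansion 1 + eps/2 up to O(eps^2)
assert abs(x - (1 + eps / 2)) < eps**2
```

The contraction factor is roughly ε/2, which is why very few iterations suffice.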
2.3 Singular perturbation
We now investigate the following equation, which will give very different results compared with the previous section:

εx^2 − x − 1 = 0. (2.3.1)
Setting ε = 0, equation (2.3.1) becomes

−x − 1 = 0. (2.3.2)
Therefore one root of (2.3.1) disappears as ε becomes 0; this is very different from (2.1.1). It is not difficult to find the roots of (2.3.1), which read

x_±^ε = (1 ± √(1 + 4ε)) / (2ε).
There holds, as ε → 0,

x_−^ε = (1 − √(1 + 4ε)) / (2ε) → −1,
and, by a Taylor expansion,

x_+^ε = (1 + √(1 + 4ε)) / (2ε) = (1/(2ε)) (1 + 1 + 2ε − 2ε^2 + ⋯) = 1/ε + 1 − ε + ⋯. (2.3.3)

Figure 2.2: Graph of f(x; ε) = εx^2 − x − 1 as ε goes to zero. The values of ε corresponding to the black, red, blue, green curves (from left to right in the right half plane) are 1, 0.1, 0.05, 0.025; the corresponding positive roots are close to 1/ε + 1, i.e. approximately 2, 11, 21, 41, respectively.
We plot the function f(x; ε) := εx^2 − x − 1 in Figure 2.2, which shows how the two zeroes of f(x; ε) change with the small parameter ε.
We therefore see that as ε goes to zero the smaller root, i.e. x_−^ε, converges to −1 (the root of the limit equation (2.3.2)), while the bigger one blows up rapidly at the rate 1/ε. Thus we cannot expect an asymptotic expansion like the one for a regular problem to be valid in this case as well. How do we find a suitable scale for such a problem? We shall make use of the rescaling technique.
2.3.1 Rescaling
Suppose that we do not know a priori the correct scale for constructing an asymptotic expansion; the rescaling technique then helps us to find it. This subsection is concerned with that technique, and we take (2.3.1) as an example.
Let δ be a real function of ε, and let

x = δX,

where δ = δ(ε) and X = O(1). The rescaling technique consists in determining the function δ, whereby the new variable X is found.

Rewriting (2.3.1) in terms of X, we have

εδ^2 X^2 − δX − 1 = 0. (2.3.4)

By comparing the coefficients of (2.3.4), namely

εδ^2, δ, 1,
we divide the rescaling argument into five cases.
Case i) δ ≫ 1/ε. Multiplying (2.3.4) by 1/(εδ^2) yields

X^2 − (εδ)^{−1}X − (εδ^2)^{−1} = 0, with (εδ)^{−1} = o(1) and (εδ^2)^{−1} = o(1), (2.3.9)

and hence X = o(1), contradicting X = O(1). So this is not a suitable scale either.
In conclusion, the suitable scale is δ = 1/ε, thus

x = X/ε.

Equation (2.3.4) is then changed to

X^2 − X − ε = 0. (2.3.10)
A singular problem is reduced to a regular one.
We now turn back to equation (2.3.1). The rescaling suggests the following ansatz:

x^ε = ε^{−1}x_{−1} + x_0 + εx_1 + ⋯. (2.3.11)
Inserting this into (2.3.1) and comparing the coefficients of ε^i (i = −1, 0, 1, …) on both sides of (2.3.1), we obtain

ε^{−1} : x_{−1}^2 − x_{−1} = 0, (2.3.12)
ε^0 : 2x_{−1}x_0 − x_0 − 1 = 0, (2.3.13)
ε^1 : x_0^2 + 2x_{−1}x_1 − x_1 = 0. (2.3.14)
The roots of (2.3.12) are x_{−1} = 1 and x_{−1} = 0. The second root does not yield a singular asymptotic expansion, so it is easily excluded. We consider now x_{−1} = 1. From (2.3.13) and (2.3.14) one solves

x_0 = 1, x_1 = −1.
Therefore we construct expansions X_i^ε of the root x_+^ε, keeping up to i + 1 terms (i = 0, 1, 2):

X_2^ε = 1/ε + 1 − ε, (2.3.15)
X_1^ε = 1/ε + 1, (2.3.16)

and

X_0^ε = 1/ε. (2.3.17)

How precisely do they satisfy equation (2.3.1)? Computations yield

ε(X_0^ε)^2 − X_0^ε − 1 = −1;

correspondingly, x_+^ε − X_0^ε = 1 − ε + o(ε) → 1 ≠ 0. This means that an expansion with only one term is not a good approximation.
Furthermore,

ε(X_1^ε)^2 − X_1^ε − 1 = ε,

and

ε(X_2^ε)^2 − X_2^ε − 1 = O(ε^2),

while we have the following estimates:

x_+^ε − X_0^ε = O(1), x_+^ε − X_1^ε = O(ε), x_+^ε − X_2^ε = O(ε^2).
Thus X_1^ε and X_2^ε are good approximations to x_+^ε, while X_0^ε is not. Moreover, we can conclude that the more terms we take, the more precise the approximation is; this is not always true, however, as we shall see in Chapter 3 in the example of a second-order ordinary differential equation. We have also worked out, approximately, the profile of x_+^ε; in other words, we now know how this root disappears by blowing up as ε goes to zero.
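A numerical check (again my own sketch, with illustrative constants in the bounds) makes the error estimates above concrete:

```python
import math

def x_plus(eps):
    # exact large root of eps*x^2 - x - 1 = 0; it blows up like 1/eps
    return (1 + math.sqrt(1 + 4 * eps)) / (2 * eps)

for eps in (0.1, 0.01):
    X0 = 1 / eps                # one-term expansion (2.3.17)
    X1 = 1 / eps + 1            # two-term expansion (2.3.16)
    X2 = 1 / eps + 1 - eps      # three-term expansion (2.3.15)
    assert abs(x_plus(eps) - X0) > 0.5           # O(1) error: not an approximation
    assert abs(x_plus(eps) - X1) < 2 * eps       # O(eps) error
    assert abs(x_plus(eps) - X2) < 3 * eps**2    # O(eps^2) error
```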
2.4 Nonintegral powers
All the asymptotic expansions in the preceding sections are series with integral powers of ε; in general, however, this need not be the case. Here we give an example. Consider

(1 − ε)x^2 − 2x + 1 = 0. (2.4.1)
Define, as in the previous sections, an ansatz as follows:

x^ε = x_0 + εx_1 + ε^2x_2 + ⋯; (2.4.2)
inserting (2.4.2) into (2.4.1) and balancing both sides yields

ε^0 : x_0^2 − 2x_0 + 1 = 0, (2.4.3)
ε^1 : 2x_0x_1 − 2x_1 − x_0^2 = 0, (2.4.4)
ε^2 : 2x_0x_2 − 2x_2 + x_1^2 − 2x_0x_1 = 0. (2.4.5)

From (2.4.3) one gets x_0 = 1, whence (2.4.4) implies 2x_1 − 2x_1 − 1 = 0, that is, −1 = 0, a contradiction. So ansatz (2.4.2) is not well defined.
Now we define an ansatz as

x^ε = x_0 + ε^α x_1 + ε^β x_2 + ⋯, (2.4.6)

where 0 < α < β < ⋯ are constants to be determined. Inserting this ansatz into (2.4.1) and balancing both sides, we find that there must hold

α = 1/2, β = 1, …,
and the correct ansatz is

x^ε = x_0 + ε^{1/2}x_1 + εx_2 + ε^{3/2}x_3 + ⋯. (2.4.7)

The remaining part of the construction is similar to the previous ones, simply by introducing δ = ε^{1/2}. We obtain

x^ε = 1 − ε^{1/2} + ε − ε^{3/2} + ⋯.

Chapter 3
Ordinary differential equations
This chapter is concerned with the study of ordinary differential equations, and we start with some definitions. Consider

L_0[u] + εL_1[u] = f_0 + εf_1, in D, (3.0.1)

and the associated equation corresponding to the case ε = 0,

L_0[u] = f_0, in D. (3.0.2)

Here L_0, L_1 are known operators, either ordinary or partial; f_0, f_1 are given functions. The terms εL_1[u] and εf_1 are called perturbations.
E^ε (respectively E^0) denotes the problem consisting of equation (3.0.1) (respectively (3.0.2)) together with suitable boundary/initial conditions. The solution to problem E^ε (respectively E^0) is denoted by u^ε (respectively u^0).
Definition 3.0.1 Problem E^ε is regular if

‖u^ε − u^0‖_D → 0

as ε → 0. Otherwise, problem E^ε is referred to as singular. Here ‖·‖_D is a suitable norm over the domain D.
Whether a problem is regular or not may depend on the choice of the norm, as the following example clarifies.
Example 3.0.1 Let D = (0, 1) and let φ^ε : D → R be a real function which solves

ε d^2φ^ε/dx^2 + dφ^ε/dx = 0, in D, (3.0.3)

φ^ε|_{x=0} = 0, φ^ε|_{x=1} = 1. (3.0.4)

The solution is

φ^ε = φ(x; ε) = (1 − e^{−x/ε}) / (1 − e^{−1/ε}),

which is monotone increasing.
Now we consider two norms. i) ‖φ‖_D = max_{x∈D̄} |φ(x)|; then problem (3.0.3)–(3.0.4) is singular, since ‖φ^ε − φ^0‖_D → 1, where φ^0 ≡ 0 or φ^0 ≡ 1 is a solution of the limit equation dφ/dx = 0 satisfying one of the boundary conditions.
ii) Define

‖φ‖_D = (∫_D |φ|^2 dx)^{1/2}

and choose φ^0 ≡ 1, which satisfies dφ^0/dx = 0. Then we can prove easily that ‖φ^ε − φ^0‖_D → 0 as ε → 0, whence problem (3.0.3)–(3.0.4) is regular.
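The norm dependence can also be seen numerically. The following script (my own illustration; the grid sizes and thresholds are arbitrary) approximates both norms of φ^ε − 1 for a small ε:

```python
import math

def phi(x, eps):
    # exact solution of (3.0.3)-(3.0.4)
    return (1 - math.exp(-x / eps)) / (1 - math.exp(-1 / eps))

eps = 1e-3
xs = [i / 10000 for i in range(10001)]
diffs = [phi(x, eps) - 1 for x in xs]    # compare with phi^0 = 1

max_norm = max(abs(d) for d in diffs)
# Riemann-sum approximation of the L^2 norm on (0, 1)
l2_norm = math.sqrt(sum(d * d for d in diffs) / len(diffs))

assert max_norm > 0.99   # maximum norm stays of order 1: singular
assert l2_norm < 0.05    # L^2 norm is O(sqrt(eps)): regular in this norm
```

The boundary layer at x = 0 has width O(ε), so it dominates the maximum norm but contributes only O(√ε) to the L^2 norm.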
In what follows we restrict ourselves to the maximum norm, and in the remaining part of this chapter the domain D is defined by D = (0, 1).
3.1 First order ODEs
This section is devoted to discussing matched asymptotic expansions in the context of ordinary differential equations; here we will meet new ideas that could not appear in the examples of algebraic equations, those being too simple. Both regular and singular problems will be studied.
3.1.1 Regular
In this subsection we first study a regular problem for an ordinary differential equation of first order. Consider the following problem:

du^ε/dx + u^ε = εx, in D, (3.1.1)

u^ε(0) = 1, (3.1.2)

and its associated problem

du^0/dx + u^0 = 0, in D, (3.1.3)

u^0(0) = 1. (3.1.4)
These are simple problems; however, we shall see later that they exhibit almost all the features of matched asymptotic expansions.

We can easily solve problems (3.1.1)–(3.1.2) and (3.1.3)–(3.1.4), whose solutions read

u^ε(x) = (1 + ε)e^{−x} + ε(x − 1), (3.1.5)
u^0(x) = e^{−x}. (3.1.6)
Calculating the difference of these two solutions yields

‖u^ε − u^0‖_D = max_{x∈D̄} ε|e^{−x} + x − 1| → 0

as ε → 0. Therefore problem (3.1.1)–(3.1.2) is regular, and the term εx is a regular
perturbation.
But in general one cannot expect explicit simple formulas, like (3.1.5) and (3.1.6), for the exact solutions. We therefore next treat this problem in a general way and employ the method of asymptotic expansions. To this end, we define an ansatz

u^ε(x) = u_0(x) + εu_1(x) + ε^2u_2(x) + ⋯. (3.1.7)
Then, inserting it into equation (3.1.1) and comparing the coefficients of ε^i on both sides of the resulting equation, we get

ε^0 : u_0′ + u_0 = 0, (3.1.8)
ε^1 : u_1′ + u_1 = x, (3.1.9)
ε^2 : u_2′ + u_2 = 0. (3.1.10)
We need to derive the initial conditions for the u_i. The condition for u_0 follows from (3.1.2) and ansatz (3.1.7). In fact, there holds

1 = u^ε(0) = u_0(0) + εu_1(0) + ε^2u_2(0) + ⋯ → u_0(0), (3.1.11)
as ε → 0; thus u_0(0) = 1. With this in hand, we use (3.1.11) again to derive the condition for u_1: we obtain 0 = εu_1(0) + ε^2u_2(0) + ⋯, whence

0 = u_1(0) + εu_2(0) + ⋯. (3.1.12)
Letting ε → 0 we get u_1(0) = 0. In a similar manner we find the condition for u_2. Thus the following conditions must be met:
u0(0) = 1, (3.1.13)
u1(0) = 0, (3.1.14)
u2(0) = 0. (3.1.15)

Solving equations (3.1.8), (3.1.9) and (3.1.10) with conditions (3.1.13), (3.1.14) and (3.1.15), respectively, we obtain

u_0(x) = e^{−x}, (3.1.16)
u_1(x) = x − 1 + e^{−x}, (3.1.17)
u_2(x) = 0. (3.1.18)
Thus the approximations can be constructed:

U_0^ε(x) = e^{−x}, (3.1.19)
U_1^ε(x) = U_2^ε(x) (3.1.20)
= (1 + ε)e^{−x} + ε(x − 1). (3.1.21)
Simple calculations show that U_0^ε(x) satisfies (3.1.1) with a small error,

(U_0^ε(x))′ + U_0^ε(x) − εx = O(ε),

while condition (3.1.2) is satisfied exactly. Note that U_1^ε(x) = U_2^ε(x) is equal to the exact solution (3.1.5), so these solve problem (3.1.1)–(3.1.2) and are a perfect approximation.
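A quick numerical confirmation of the regularity (my own sketch; the grid and constant in the bound are arbitrary) computes the sup-error of the leading-order approximation U_0^ε on [0, 1]:

```python
import math

def u_exact(x, eps):
    # exact solution (3.1.5)
    return (1 + eps) * math.exp(-x) + eps * (x - 1)

for eps in (0.1, 0.01):
    sup_err = max(abs(u_exact(x, eps) - math.exp(-x))   # U_0(x) = e^{-x}
                  for x in [i / 100 for i in range(101)])
    # regular problem: the error is uniformly O(eps) on [0, 1]
    assert sup_err < 2 * eps
```

Unlike the singular problem of the next subsection, the error here is small uniformly in x, including at the boundary.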
3.1.2 Singular
Now we are going to study singular perturbations and boundary layers. The perturbed problem is

ε du^ε/dx + u^ε = x, in D, (3.1.22)

u^ε(0) = 1, (3.1.23)

and the associated one is

u^0(x) = x, in D, (3.1.24)

u^0(0) = 1, (3.1.25)

from which one easily sees that condition (3.1.25) cannot be satisfied; thus a boundary layer occurs at x = 0. The exact solution of problem (3.1.22)–(3.1.23) is

u^ε(x) = (1 + ε)e^{−x/ε} + x − ε. (3.1.26)
Let u^0(x) = x. Computation yields

‖u^ε − u^0‖_D = max_{x∈D̄} |(1 + ε)e^{−x/ε} − ε| = 1

for any positive ε. Therefore, by definition, problem (3.1.22)–(3.1.23) is singular, and the term ε du^ε/dx is a singular perturbation.
We next employ the method of asymptotic expansions to study this singular problem; for such a problem at least one boundary layer arises, and a matched asymptotic expansion is the suitable tool. We will construct outer and inner expansions which are valid (by which we mean that the expansion satisfies the corresponding equation well) in the so-called outer and inner regions, respectively. Then we derive matching conditions which enable us to establish an asymptotic expansion that is valid uniformly in the whole domain.
We now explain the regions in Figure 3.1. The boundary layer, or inner region, occurs near x = 0 and is very thin: the O(ε) part in Figure 3.1. The outer region is the subset of (0, 1) consisting of the points far from the boundary layer, the O(1) part, which is the largest part and usually occupies most of the domain under consideration. As x increases to the size O(1), the inner regime passes over to the outer one, so there is an intermediate (or matching, or overlap) region between them; in the present problem its scale is O(ε^γ) with γ ∈ (0, 1). This region is very large compared with the inner one, yet very small compared with the outer one.
We now turn to the construction of asymptotic expansions, and start withouter expansions.
3.1.2.1 Outer expansions
The ansatz for deriving the outer expansion is just of the form of a regular expansion:

u^ε(x) = u_0(x) + εu_1(x) + ε^2u_2(x) + ⋯. (3.1.27)
Similarly to the approach used for the regular problem (3.1.1)–(3.1.2), we obtain

ε^0 : u_0(x) = x, (3.1.28)
ε^1 : u_1(x) + u_0′(x) = 0, (3.1.29)
ε^2 : u_2(x) + u_1′(x) = 0. (3.1.30)
Solving the above problems yields

u_0(x) = x, (3.1.31)
u_1(x) = −1, (3.1.32)
u_2(x) = 0. (3.1.33)

Figure 3.1: Regions: the inner region of size O(ε), the intermediate region of size O(ε^γ), and the outer region of size O(1), where γ ∈ (0, 1) is a constant.
Then we get the approximation

O_2^ε(x) = x − ε. (3.1.34)
Moreover, from (3.1.31) we obtain u_0(0) = 0, which differs from the given condition u^ε(0) = 1; thus there is a boundary layer at x = 0.
3.1.2.2 Inner expansions
To construct inner expansions, we introduce a new variable, the so-called fast variable

ξ = x/ε.
On one hand, for a beginner in the theory of asymptotic analysis it may not be easy to understand why we define ξ in exactly this form, with the power of ε equal to one. To convince oneself, one may assume the more general form ξ = x/ε^γ with γ ∈ R; repeating the procedure that we will carry out in the next subsection, one proves that γ must equal 1 in order to obtain an asymptotic expansion. On the other hand, we have already assumed that a boundary layer occurs at x = 0. If this assumption were incorrect, the procedure would break down when one tries to match the inner and outer expansions in the intermediate region. In that case one may assume that there exists a boundary layer near a point x = x_0; the analysis is the same, except that the scale transformation in the boundary layer is ξ = (x − x_0)/ε. We shall carry out the analysis for determining γ in the next section.

An inner expansion is an expansion in terms of ξ; we assume that
u(x) = U0() + U1() + 2U2() + . (3.1.35)
It is easy to compute that for i = 0, 1, 2, ⋯,

dUi(ξ)/dx = (1/ε) dUi(ξ)/dξ.
Invoking equation (3.1.1) we arrive at

ε^{−1} : U′0 + U0 = 0, (3.1.36)
ε⁰ : U′1 + U1 = ξ, (3.1.37)
ε¹ : U′2 + U2 = 0. (3.1.38)

From these we have

U0(ξ) = C0e^{−ξ}, (3.1.39)
U1(ξ) = C1e^{−ξ} + ξ − 1, (3.1.40)
U2(ξ) = C2e^{−ξ}. (3.1.41)
The second step is to determine the constants Ci with i = 0, 1, 2. To this end, we use the condition at x = 0, which implies ξ = 0 too, to conclude that U0(0) = 1; thus C0 = 1. Similarly we have C1 = 1, C2 = 0. Therefore, an inner expansion is obtained:

Iε2(ξ) = (1 + ε)e^{−ξ} + ε(ξ − 1). (3.1.42)
3.1.2.3 Matched asymptotic expansions
Here we shall discuss two main approaches to constructing a uniform asymptotic expansion by combining the inner and outer expansions. The first one is to take the sum of the inner expansion (3.1.42) and the outer expansion (3.1.34), and then subtract their common part, which is valid in the intermediate region.
To get a matched asymptotic expansion, it remains to find the common part. Assume that there exists a uniform asymptotic expansion, say Uε2; then there hold

lim_{x ∈ outer region, x→0} Uε2(x) = lim_{x→0} Oε2(x), (3.1.43)
lim_{x ∈ inner region, x→0} Uε2(x) = lim_{x→0} Iε2(x/ε), (3.1.44)

thus there must hold

U0(ξ) + εU1(ξ) + ε²U2(ξ) = u0(x) + εu1(x) + ε²u2(x) + O(ε³).
Following Fife [14], we rewrite x = εξ and expand the right-hand side in terms of ε. There hold

U0(ξ) + εU1(ξ) + ε²U2(ξ)
= u0(εξ) + εu1(εξ) + ε²u2(εξ) + O(ε³)
= u0(0) + u′0(0)εξ + ½u″0(0)(εξ)² + ε(u1(0) + u′1(0)εξ) + ε²u2(0) + O(ε³)
= u0(0) + ε(u′0(0)ξ + u1(0)) + ε²(½u″0(0)ξ² + u′1(0)ξ + u2(0)) + O(ε³). (3.1.45)
Therefore we obtain the following matching conditions:

U0(ξ) ∼ u0(0) = 0, (3.1.46)
U1(ξ) ∼ u′0(0)ξ + u1(0) = ξ − 1, (3.1.47)
U2(ξ) ∼ ½u″0(0)ξ² + u′1(0)ξ + u2(0) = 0, (3.1.48)

for ξ → ∞. The common parts, respectively, are:

i) if we take the first term,

Common part0 = 0;

ii) if we take the first two terms,

Common part1 = ε(ξ − 1);
iii) if we take the first three terms,

Common part2 = ε(ξ − 1).
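The matching condition (3.1.47) can be checked directly: with C1 = 1, the inner coefficient U1(ξ) = e^{−ξ} + ξ − 1 approaches the common behaviour ξ − 1 exponentially fast. A small sketch:

```python
import math

def U1(xi):
    # U1 from (3.1.40) with C1 = 1
    return math.exp(-xi) + xi - 1.0

# the gap to the matching behaviour (3.1.47) is exactly e^{-xi}
gaps = [abs(U1(xi) - (xi - 1.0)) for xi in (2.0, 5.0, 10.0)]
```

The gap decays like e^{−ξ}, which is why any algebraically large intermediate scale ε^γ suffices for the matching.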
The matched asymptotic expansion then is

Uε2(x) = Iε2(ξ) + Oε2(x) − Common part2
= (1 + ε)e^{−ξ} + ε(ξ − 1) + (x − ε) − ε(ξ − 1)
= (1 + ε)e^{−x/ε} + x − ε. (3.1.49)

So Uε2(x) is just the exact solution to problem (3.1.22) – (3.1.23). Similarly, we can also construct asymptotic expansions up to one and two terms, which read, respectively,
Uε0(x) = Iε0(ξ) + Oε0(x) − Common part0 = e^{−x/ε} + x,

and

Uε1(x) = Iε1(ξ) + Oε1(x) − Common part1 = Uε2(x).

We thus can draw the graphs (see Figure 3.2) of uε, u0, Oε2, Uε2 and Iε2; remembering that uε = Uε2 = Iε2, one needs only to plot the first three functions.
The second method for constructing a matched asymptotic expansion from inner and outer expansions is to make use of a suitable cut-off function to form a linear combination of the inner and outer expansions.
We define a smooth function φ = φ(ξ) : R+ → R+ such that

φ(ξ) = { 1 if ξ ≤ 1, 0 if ξ ≥ 2, (3.1.50)

and 0 ≤ φ(ξ) ≤ 1 if ξ ∈ [1, 2]. And let

φε(x) = φ(x/ε^γ), (3.1.51)

from which it is easily seen that

supp(φε) ⊂ [0, 2ε^γ], supp(φ′ε) ⊂ [ε^γ, 2ε^γ].

Here γ ∈ (0, 1) is a fixed number; φε and φ′ε are plotted in Figure 3.3. Now we are able to define an approximation by

Uε2(x) = (1 − φε(x))Oε2(x) + φε(x)Iε2(x/ε). (3.1.52)
Figure 3.2: Graphs of uε, u0, Uε2 with ε = 0.1, corresponding, respectively, to the black, blue and red curves.
By this method we don't need to find the common part and the argument is simpler; however, the price we pay is that Uε2(x) no longer satisfies equation (3.1.22) exactly. Instead, an error occurs:

ε dUε2(x)/dx + Uε2(x) − x = εφ′ε(x)(Iε2(ξ) − Oε2(x)) = O(ε^{1−γ}). (3.1.53)
What is the relation between the two approximations? Let us rewrite (3.1.52) as

Uε2(x) = Oε2(x) + Iε2(x/ε) − ((1 − φε(x))Iε2(x/ε) + φε(x)Oε2(x))
= Oε2(x) + Iε2(x/ε) − Rε, (3.1.54)

where

Rε = (1 − φε(x))Iε2(x/ε) + φε(x)Oε2(x). (3.1.55)
It is easy to guess that Rε is an approximation of the common part we found in the first method. What is the difference between Rε and the common part?

Figure 3.3: Supports of φε and φ′ε.

The cut-off φε varies only on [ε^γ, 2ε^γ] — the intermediate region! Thus we can expect that Oε2 and Iε2 coincide in this region and are both approximately equal to the common part (say A), so Rε ≈ (1 − φε(x))A + φε(x)A = A.
3.2 Second order ODEs and boundary layers
The previous examples have given us some idea of the method of asymptotic expansions. However, they cannot exhibit all the features of this method, since they are really too simple. In this section we are going to study a more complex problem which possesses more aspects of asymptotic expansions. Consider the problem for a second order ordinary differential equation:
ε d²u/dx² + (1 + ε) du/dx + u = 0, in D, (3.2.1)
u(0) = 0, u(1) = 1, (3.2.2)

and the associated problem with ε = 0 is

du/dx + u = 0, in D, (3.2.3)
u(0) = 0, (3.2.4)
u(1) = 1. (3.2.5)
Noting that (3.2.3) is of first order, we conclude that only one of the two conditions (3.2.4) and (3.2.5) can be satisfied, though we still write both conditions here.
First of all, we find the exact solutions of the above problems, which are, respectively, as follows:

uε(x) = (e^{−x} − e^{−x/ε})/(e^{−1} − e^{−1/ε}) → { 0, if x = 0; e^{1−x}, if x > 0,

(as ε → 0), and

u0(x) = { 0, if u(0) = 0; e^{1−x}, if u(1) = 1.

Therefore, in either case there is a boundary layer occurring at x = 0 or x = 1, since

{ uε(1) − u0(1) → 1, if u0(x) = 0; uε(0) − u0(0) → −e, if u0(x) = e^{1−x}.

Hence, problem (3.2.1) – (3.2.2) cannot be regular. We explain this. Set

uε(x) = u0(x) + εu1(x) + ε²u2(x) + ⋯ . (3.2.6)
Inserting this into equation (3.2.1) and comparing the coefficients of εⁱ on both sides, one has

du0/dx + u0 = 0, (3.2.7)
u0(0) = 0, (3.2.8)
u0(1) = 1. (3.2.9)

Then we further get

du1/dx + u1 + d²u0/dx² + du0/dx = 0, (3.2.10)
u1(0) = 0, (3.2.11)

du2/dx + u2 + d²u1/dx² + du1/dx = 0, (3.2.12)
u2(0) = 0. (3.2.13)
Since equation (3.2.7) is of first order, only one of conditions (3.2.8) and (3.2.9) can be satisfied. The solution to (3.2.7) is

u0(x) = C0e^{−x}.

We shall see that even if we require u0(x) to meet only one of (3.2.8) and (3.2.9), there is still no asymptotic expansion of the form (3.2.6). There are two cases.
Case i) Suppose that the condition at x = 0 is satisfied (we don't care at this moment about the other condition); then C0 = 0, hence
u0(x) = 0.
Solving problems (3.2.10) – (3.2.11) and (3.2.12) – (3.2.13), one has
u1(x) = u2(x) = 0.
Thus no asymptotic expansion can be found.
Case ii) Assume that the condition at x = 1 is satisfied; then C0 = e, and u0(x) = e^{1−x}. Consequently, equations (3.2.10) and (3.2.12) become

du1/dx + u1 = 0, (3.2.14)
du2/dx + u2 = 0. (3.2.15)

Hence,

u1(x) = C1e^{−x}, u2(x) = C2e^{−x}.
But from the conditions u1(1) = u2(1) = 0, which can be derived from ansatz (3.2.6), it follows that C1 = C2 = 0, whence

u1(x) = u2(x) = 0,

and the possible asymptotic expansion is

Uεi(x) = e^{1−x}

for any i = 0, 1, 2, ⋯. Note that Uεi(0) − uε(0) → e ≠ 0. Problem (3.2.1) – (3.2.2) is thus singular.
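The singular character shows up immediately in a numerical experiment using the exact solution found above:

```python
import math

def u_eps(x, eps):
    # exact solution of eps*u'' + (1+eps)*u' + u = 0, u(0)=0, u(1)=1
    # (the characteristic roots are -1 and -1/eps)
    return (math.exp(-x) - math.exp(-x / eps)) / (math.exp(-1.0) - math.exp(-1.0 / eps))

eps = 1e-3
# away from x = 0 the solution is close to the outer limit e^{1-x} ...
far_gap = abs(u_eps(0.5, eps) - math.exp(1.0 - 0.5))
# ... but at x = 0 it is pinned to 0, a distance e from the outer limit
layer_gap = abs(u_eps(0.0, eps) - math.exp(1.0))
```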
3.2.1 Outer expansions
This subsection is concerned with outer expansions. We begin with the definition of an ansatz:

uε(x) = u0(x) + εu1(x) + ε²u2(x) + ⋯ . (3.2.16)

For simplicity of notation, we will denote the derivative of a one-variable function by ′, namely f′(x) = df/dx, f′(ξ) = df/dξ, etc. Inserting (3.2.16) into equation (3.2.1) and equating the coefficients of εⁱ on both sides yields

ε⁰ : u′0 + u0 = 0, u0(1) = 1, (3.2.17)
ε¹ : u′1 + u1 + u″0 + u′0 = 0, u1(1) = 0, (3.2.18)
ε² : u′2 + u2 + u″1 + u′1 = 0, u2(1) = 0. (3.2.19)
The solutions to (3.2.17), (3.2.18) and (3.2.19) are, respectively,

u0(x) = e^{1−x}, u1(x) = u2(x) = 0.

Thus outer approximations (up to (i + 1) terms) can be constructed as follows:

Oεi(x) = e^{1−x}, (3.2.20)

here i = 0, 1, 2.
3.2.2 Inner expansions
The construction of an inner expansion is more complicated than that of an outer expansion. Firstly a correct scale should be decided, using the rescaling technique that was used for studying the singular perturbation of an algebraic equation in Chapter 2.
3.2.2.1 Rescaling
Introduce a new variable

ξ = x/δ, (3.2.21)

where δ is a function of ε; we write δ = δ(ε). In what follows we shall prove that, in order to get an inner expansion which matches the outer expansion well, δ should be very small, so ξ changes very rapidly as x changes and is hence called a fast variable.

The first goal of this subsection is to find a correct formula for δ. Rewriting equation (3.2.1) in terms of ξ gives

(ε/δ²) d²U/dξ² + ((1 + ε)/δ) dU/dξ + U = 0. (3.2.22)
To investigate the relation of the coefficients of (3.2.22), i.e.

ε/δ², (1 + ε)/δ, 1,

we divide the rescaling argument into five cases. Note that

(1 + ε)/δ ∼ 1/δ

since ε ≪ 1.

Case i) δ ≫ 1. Recalling that
3.2.2.2 Partly determined inner expansions
Now we turn back to the construction of inner expansions. From the rescaling we can define

ξ = x/ε, (3.2.27)

and an ansatz as follows:

uε(x) = U0(ξ) + εU1(ξ) + ε²U2(ξ) + ⋯ . (3.2.28)
It is easy to compute that for i = 0, 1, 2, ⋯,

dUi(ξ)/dx = (1/ε) dUi(ξ)/dξ, d²Ui(ξ)/dx² = (1/ε²) d²Ui(ξ)/dξ².

Then inserting them into equation (3.2.1) and balancing both sides of the resulting equation, we arrive at

ε^{−1} : U″0 + U′0 = 0, (3.2.29)
ε⁰ : U″1 + U′1 + U′0 + U0 = 0, (3.2.30)
ε¹ : U″2 + U′2 + U′1 + U1 = 0. (3.2.31)
From these we obtain the general solutions

U0(ξ) = C01e^{−ξ} + C02, (3.2.32)
U1(ξ) = C11e^{−ξ} + C12 − C02ξ, (3.2.33)
U2(ξ) = C21e^{−ξ} + C22 + (C02/2)ξ² − C12ξ. (3.2.34)

Here Cij with i = 0, 1, 2; j = 1, 2 are constants. The next step is to determine these constants. To this end we use the condition at x = 0, which implies that ξ = 0 too, to conclude that

U0(0) = 0, U1(0) = 0 and U2(0) = 0,

thus

Ci1 = −Ci2 =: Ai, (3.2.35)

for i = 0, 1, 2. Hence, (3.2.32) – (3.2.34) are reduced to

U0(ξ) = A0(e^{−ξ} − 1), (3.2.36)
U1(ξ) = A1(e^{−ξ} − 1) + A0ξ, (3.2.37)
U2(ξ) = A2(e^{−ξ} − 1) − (A0/2)ξ² + A1ξ. (3.2.38)
Therefore we still need to find the constants Ai. For this purpose we need the so-called matching conditions. An inner region is near the boundary layer and is usually very thin — of O(ε) in the present problem — while an outer region is far from the boundary layer and is of O(1) (thus a very large region compared with the boundary layer). Therefore, as x increases to the size of O(1), the inner regime passes over to the outer one; from this we see that there is an intermediate (or matching, overlapping) region between them, whose scale is O(ε^γ) with γ ∈ (0, 1) in the present problem.

Inner and outer expansions are valid, respectively, over the inner and outer regions. Our purpose is to establish an approximation valid in the whole domain, a so-called uniform approximation. Thus it is natural to require that the inner and outer expansions coincide in the intermediate region, so some common conditions must be satisfied there by the inner and outer expansions. These common conditions are the so-called matching conditions. The task of the next subsection is to find such conditions.
3.2.3 Matching conditions
Now we reasonably expect that the inner expansions coincide with the outer ones in the intermediate region, and write

U0(ξ) + εU1(ξ) + ε²U2(ξ) = u0(x) + εu1(x) + ε²u2(x) + O(ε³).
We shall employ two main methods to derive matching conditions.
3.2.3.1 Matching by expansions
(Relation with the intermediate variable method?) Following again the method used in Fife's book [14], we rewrite x = εξ and expand the right-hand side in terms of ε. We then obtain the matching conditions

U0(ξ) ∼ u0(0) = e, (3.2.39)
U1(ξ) ∼ u′0(0)ξ + u1(0) = −eξ, (3.2.40)
U2(ξ) ∼ ½u″0(0)ξ² + u′1(0)ξ + u2(0) = (e/2)ξ², (3.2.41)

for ξ → ∞. From (3.2.36) it follows that

U0(ξ) → −A0

as ξ → ∞. Combination with (3.2.39) yields

A0 = −e. (3.2.42)
Hence, (3.2.36) – (3.2.38) are now

U0(ξ) = −e(e^{−ξ} − 1), (3.2.43)
U1(ξ) = A1(e^{−ξ} − 1) − eξ, (3.2.44)
U2(ξ) = A2(e^{−ξ} − 1) + (e/2)ξ² + A1ξ. (3.2.45)

So the leading term of the inner expansion is obtained. Comparing (3.2.40) with (3.2.44) for large ξ, we have

A1 = 0. (3.2.46)

In a similar manner, from (3.2.41) and (3.2.45) one gets

A2 = 0. (3.2.47)

Therefore, the first three terms of the inner expansion are determined, which read

U0(ξ) = e(1 − e^{−ξ}), (3.2.48)
U1(ξ) = −eξ, (3.2.49)
U2(ξ) = (e/2)ξ². (3.2.50)

Using these functions, we define approximations up to (i + 1) terms (i = 0, 1, 2) as follows:

Iε0(ξ) = e(1 − e^{−ξ}), (3.2.51)
Iε1(ξ) = e(1 − e^{−ξ}) − εeξ, (3.2.52)
Iε2(ξ) = e(1 − e^{−ξ}) − εeξ + ε²(e/2)ξ². (3.2.53)
Now we plot in Figure 3.4 the exact solution and the inner and outer expansions (up to one term).
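The quality of the inner expansion inside the layer can also be checked numerically; the following sketch compares the two-term expansion Iε1 of (3.2.52) with the exact solution at x = εξ:

```python
import math

eps = 1e-2

def exact(x):
    return (math.exp(-x) - math.exp(-x / eps)) / (math.exp(-1.0) - math.exp(-1.0 / eps))

def I1(xi):
    return math.e * (1.0 - math.exp(-xi)) - eps * math.e * xi   # eq. (3.2.52)

# inside the layer (x = eps*xi, xi = O(1)) the error is O(eps^2)
err = max(abs(exact(eps * xi) - I1(xi)) for xi in [0.5 * k for k in range(11)])
```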
3.2.3.2 Van Dyke's rule for matching
Matching with an intermediate variable can be tiresome. The following Van Dyke's rule [34] for matching usually works and is more convenient.
For a function f we have corresponding outer and inner expansions, denoted respectively by f ≈ Σn εⁿfn(x) and f ≈ Σn εⁿgn(ξ). We define
Figure 3.4: Graphs of uε, Oε0, Iε0 with ε = 0.1, corresponding, respectively, to the red, black and blue curves.
Definition 3.2.2 Let P, Q be nonnegative integers. Define

EP f = outer limit (x fixed, ε → 0), retaining P + 1 terms of the outer expansion
= Σ_{n=0}^{P} εⁿfn(x), (3.2.54)

and

HQ f = inner limit (ξ fixed, ε → 0), retaining Q + 1 terms of the inner expansion
= Σ_{n=0}^{Q} εⁿgn(ξ). (3.2.55)

Then the Van Dyke matching rule can be stated as

EP HQ f = HQ EP f.
Example 3.2.1 Let P = Q = 0. For our problem in this section, we define f = u, H0g := A0(e^{−ξ} − 1) and E0f := e^{1−x}. Then

E0H0g = E0{A0(e^{−ξ} − 1)} = E0{A0(e^{−x/ε} − 1)} = −A0, (3.2.56)

and

H0E0f = H0{e^{1−x}} = H0{e^{1−εξ}} = e. (3.2.57)

By the Van Dyke rule, (3.2.56) must coincide with (3.2.57), and we obtain again

A0 = −e,

which is (3.2.42). We can also derive matching conditions of higher order; see the following
example:

Example 3.2.2 Let P = Q = 1. For our problem in this section, we define f = u, H1g := A0(e^{−ξ} − 1) + ε(A1(e^{−ξ} − 1) − eξ) and E1f := e^{1−x}. Then

E1H1g = E1{A0(e^{−ξ} − 1) + ε(A1(e^{−ξ} − 1) − eξ)}
= E1{A0(e^{−x/ε} − 1) + ε(A1(e^{−x/ε} − 1) − ex/ε)}
= −A0 − ex − εA1, (3.2.58)

and

H1E1f = H1{e^{1−x}} = H1{e^{1−εξ}} = H1{e(1 − εξ + O(ε²))} = e(1 − εξ). (3.2.59)

By the Van Dyke rule, (3.2.58) must coincide with (3.2.59), and hence

A0 = −e, A1 = 0,

which are (3.2.42) and (3.2.46).
However, for some problems the Van Dyke rule for matching does not work. Some examples are given in the next subsection.
3.2.4 Examples where the matching condition does not exist
Consider the following terrible problem: a function like

ln(1 + ln r / ln(1/ε)).

For this function Van Dyke's rule fails for all values of P and Q. The intermediate variable method of matching also struggles, because the leading order part of an infinite number of terms must be calculated before the matching can be made successfully. It is unusual to find such a difficult problem in practice. See the book by Hinch [22].
3.2.5 Matched asymptotic expansions
In this subsection we shall make use of the inner and outer expansions to construct approximations. Again we do it in two ways.
i) The first method: adding the inner and outer expansions, then subtracting the common part, we obtain

Uε0(x) = e^{1−x} + e(1 − e^{−ξ}) − e = e(e^{−x} − e^{−x/ε}),
Uε1(x) = e(e^{−x} − e^{−x/ε}) − ex − (−ex) = e(e^{−x} − e^{−x/ε}),
Uε2(x) = e(e^{−x} − e^{−x/ε}) + ½e(εξ)² − ½ex² = e(e^{−x} − e^{−x/ε}). (3.2.60)

From this one asserts that

Uε0(x) = Uε1(x) = Uε2(x).
Therefore, taking more terms in the approximation does not make its accuracy any greater! This is a phenomenon different from what we have seen for algebraic equations.
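Indeed the composite e(e^{−x} − e^{−x/ε}) is uniformly accurate on the whole interval; its deviation from the exact solution comes only from the exponentially small term e^{−1/ε} in the denominator. A sketch:

```python
import math

eps = 1e-2

def exact(x):
    return (math.exp(-x) - math.exp(-x / eps)) / (math.exp(-1.0) - math.exp(-1.0 / eps))

def composite(x):
    return math.e * (math.exp(-x) - math.exp(-x / eps))   # eq. (3.2.60)

uniform_err = max(abs(exact(x) - composite(x)) for x in [k / 1000.0 for k in range(1001)])
```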
ii) The second method: employing the cut-off function defined in the previous subsection, we get

Uεi(x) = (1 − φε(x))Oεi(x) + φε(x)Iεi(x/ε). (3.2.61)

Here i = 0, 1, 2.
3.3 Examples
The following examples will help us to clarify some ideas about the method of asymptotic expansions.
Example 3.3.1 The problems we discussed have only one boundary layer. However, in some singular perturbation problems more than one boundary layer can occur. This is exemplified by
ε d²u/dx² − u = A, in D, (3.3.1)
u(0) = α, (3.3.2)
u(1) = β. (3.3.3)

Here A ≠ 0, αβ ≠ 0.

Example 3.3.2 A problem can be singular although ε does not multiply the highest order derivative of the equation. A simple example is the following:
∂²u/∂x² − ε ∂u/∂y = 0, in D = {(x, y) | 0 < x < 1, 0 < y < y0}, (3.3.4)
u(x, 0; ε) = f(x), for 0 ≤ x ≤ 1, (3.3.5)
u(0, y; ε) = g1(y), for 0 ≤ y ≤ y0, (3.3.6)
u(1, y; ε) = g2(y), for 0 ≤ y ≤ y0, (3.3.7)

where we have taken y0 > 0 and chosen u0 satisfying ∂²u0/∂x² = 0 as follows:

u0(x, y; ε) = g1(y) + (g2(y) − g1(y))x.

However, in general u0(x, 0; ε) ≠ f(x), so that u0 is not an approximation of u in D̄.

This can be easily understood by noting that (3.3.4) is parabolic, while it becomes elliptic if ε = 0.
Example 3.3.3 In certain perturbation problems there exists a uniquely defined function u0 in D̄ satisfying the limit equation L0u0 = f0 and all the boundary conditions imposed on u, and yet u0 is not an approximation of u:

(x + ε)² du/dx + ε = 0, for 0 < x < A, (3.3.8)
u(0; ε) = 1. (3.3.9)

We have the exact solution to problem (3.3.8) – (3.3.9):

u = u(x; ε) = ε/(x + ε),
and

L0 = x² d/dx.

The function u0 = 1 satisfies L0u0 = 0 and also the boundary condition. But

lim_{ε→0} u(x; ε) = { 0 if x ≠ 0, 1 if x = 0, (3.3.10)

and

max_{D̄} |u − u0| = A/(A + ε) ↛ 0

as ε → 0.
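A two-line numerical experiment (with the illustrative choice A = 1) confirms that the sup-norm deviation approaches 1 rather than 0:

```python
A = 1.0

def u(x, eps):
    return eps / (x + eps)   # exact solution of (3.3.8)-(3.3.9)

# |u - u0| = x/(x+eps) is increasing in x, so the sup over [0, A] is A/(A+eps)
devs = [max(abs(u(k * A / 1000.0, e) - 1.0) for k in range(1001)) for e in (0.1, 0.01, 0.001)]
```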
Example 3.3.4 Some operators Lε cannot be decomposed into an unperturbed ε-independent part and a perturbation:

du/dx − exp(−(u − 1)/ε) = 0, for x ∈ D = {0 < x < A}, (3.3.11)
u(0; ε) = 1 − δ. (3.3.12)

Here δ > 0. Note that

lim_{ε→0} { max_{D̄} (−exp(−(g − 1)/ε)) } = 0

if and only if g > 1 for x ∈ D̄. Thus we do not have the decomposition with L0 = d/dx. Moreover, from the exact solution

u(x; ε) = 1 + ε log(x/ε + exp(−δ/ε)),

we easily assert that none of the usually successful methods produces an approximation of u in D̄.
Chapter 4
Partial differential equations
4.1 Regular problem
Let D be an open bounded domain in Rⁿ with smooth boundary ∂D, where n ∈ N. Consider

−Δuε + εuε = f0 + εf1, in D, (4.1.1)
uε|∂D = 0. (4.1.2)

The associated limit problem is

−Δu0 = f0, in D, (4.1.3)
u0|∂D = 0. (4.1.4)

We shall prove this is a regular problem under suitable assumptions on the data f0, f1. Let uε, u0 be solutions to problems (4.1.1) – (4.1.2) and (4.1.3) – (4.1.4), respectively. Define v = uε − u0; then v satisfies

−Δv = ε(f1 − uε), (4.1.5)
v|∂D = 0. (4.1.6)
Assume that f0, f1 ∈ C^α(D̄) for some α ∈ (0, 1). Applying the Schauder theory for elliptic equations, from equation (4.1.1) we assert that

‖uε‖_{C^{2,α}(D̄)} ≤ C(‖uε‖_{L^∞(D)} + ‖f0‖_{C^α(D̄)} + ‖f1‖_{C^α(D̄)}), (4.1.7)

hence, from (4.1.5) we obtain

‖v‖_{C^{2,α}(D̄)} ≤ Cε(‖uε‖_{C^α(D̄)} + ‖f1‖_{C^α(D̄)}) ≤ Cε(‖f0‖_{C^α(D̄)} + ‖f1‖_{C^α(D̄)}). (4.1.8)
Then one can conclude easily that

‖v‖_{C⁰(D̄)} → 0, i.e. ‖uε − u0‖_{C⁰(D̄)} → 0, (4.1.9)

as ε → 0. Therefore, problem (4.1.1) – (4.1.2) is regular. However, as we pointed out in Example 3.3.2 in Section 3.3, there are problems which are singular even though the small parameter does not multiply the highest order derivative. We shall give another example in the next section.
4.2 Conservation laws and the vanishing viscosity method
In this section we will study the inviscid limit of scalar conservation laws with viscosity:

uεt + (F(uε))x = εuεxx, (4.2.1)
uε|t=0 = uε0. (4.2.2)

The associated inviscid problem is

ut + (F(u))x = 0, (4.2.3)
u|t=0 = u0. (4.2.4)
A basic question is: does the solution of (4.2.1) – (4.2.2) converge to that of (4.2.3) – (4.2.4)? This is the main problem for the method of vanishing viscosity.
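Before turning to the analysis, a small finite-difference experiment illustrates (but of course does not prove) the vanishing viscosity limit. The sketch below takes the Burgers flux F(u) = u²/2 with Riemann data — both illustrative choices, and the scheme parameters are ad hoc — and watches the viscous shock profile sharpen as ε decreases:

```python
def burgers_viscous(eps, nx=200, T=0.5, L=2.0):
    # upwind advection + centered diffusion for u_t + (u^2/2)_x = eps * u_xx
    dx = L / nx
    x = [-L / 2 + (i + 0.5) * dx for i in range(nx)]
    u = [1.0 if xi < 0.0 else 0.0 for xi in x]  # Riemann data: 1 on the left, 0 on the right
    t = 0.0
    while t < T:
        dt = min(0.4 * dx, 0.2 * dx * dx / eps, T - t)  # CFL for advection and diffusion
        un = u[:]
        for i in range(1, nx - 1):
            conv = (0.5 * un[i] ** 2 - 0.5 * un[i - 1] ** 2) / dx  # upwind (u >= 0)
            diff = (un[i + 1] - 2.0 * un[i] + un[i - 1]) / (dx * dx)
            u[i] = un[i] - dt * conv + dt * eps * diff
        t += dt
    return x, u

# Rankine-Hugoniot: the inviscid shock moves with speed (F(1) - F(0))/(1 - 0) = 1/2,
# so at T = 0.5 the transition should sit near x = 0.25, with width O(eps)
```

Decreasing ε narrows the transition region while leaving the shock location (fixed by the conservation law) unchanged, which is the picture behind the matched expansions below.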
In this section we are going to prove that the answer to this question is positive under suitable assumptions. We shall make use of the method of matched asymptotic expansions and the L²-energy method. First of all, we make the following assumptions.
Assumptions.

A1) The function F is smooth. (4.2.5)

A2) Let n ∈ N. Assume g0, h0 ∈ L¹(R) ∩ L∞(R), g0x ∈ L¹(R) and pTn ∈ L¹(R) ∩ Lip_loc(R) ∩ L∞(R), and that g0x, h0 each have only one shock. There exist smooth functions gε, hε, pTn,ε ∈ C∞0(R) such that

gε, hε, pTn,ε → g0, h0, pTn (4.2.6)

in L²(R) as ε → 0, respectively. Moreover, we assume that {pTn} is a bounded sequence in BV_loc(R) such that, as n → ∞,

pTn → pT in L¹_loc(R). (4.2.7)
A3) Assume further that Oleinik's one-sided Lipschitz condition (OSLC) is satisfied, i.e.

(f(u(x, t)) − f(u(y, t))) (x − y) ≤ α(t)(x − y)² (4.2.8)

for almost every x, y ∈ R and t ∈ (0, T), where α ∈ L¹(0, T).
4.2.1 Construction of approximate solutions
4.2.1.1 Outer and inner expansions
First of all, we expand the nonlinearity F(uε) as

F(uε) = F(u0) + εf(u0)u1 + ε²(f(u0)u2 + ½f′(u0)(u1)²) + ⋯, (4.2.9)
f(uε) = f(u0) + εf′(u0)u1 + ε²(f′(u0)u2 + ½f″(u0)(u1)²) + ⋯, (4.2.10)

where f = F′.
Step 1. To construct outer expansions, we begin with the definition of an ansatz:

uε(x, t) = u0(x, t) + εu1(x, t) + ε²u2(x, t) + ⋯ . (4.2.11)

Inserting this into equation (4.2.1) and balancing both sides of the resulting equation yield

ε⁰ : (u0)t + (F(u0))x = 0, (4.2.12)
ε¹ : (u1)t + (f(u0)u1)x = R, (4.2.13)
ε² : (u2)t + (f(u0)u2)x = R1, (4.2.14)
where R and R1 are the functions defined by

R := u0,xx

and

R1 := u1,xx − (½f′(u0)(u1)²)x, (4.2.15)

from which we see that R and R1 depend on the functions u0 and u1.
Step 2. Next we construct inner expansions. Introduce a new variable

ξ = (x − φ(t))/ε,

where x = φ(t) denotes the shock curve, and the ansatz

uε(x, t) = U0(ξ, t) + εU1(ξ, t) + ε²U2(ξ, t) + ⋯ . (4.2.16)
Substituting this ansatz into the equation and equating the coefficients of εⁱ (with i = −1, 0, 1) on both sides of the resultant, we obtain

ε^{−1} : U″0 + φ̇U′0 − (F(U0))′ = 0, (4.2.17)
ε⁰ : U″1 + φ̇U′1 − (f(U0)U1)′ = U0t, (4.2.18)
ε¹ : U″2 + φ̇U′2 − (f(U0)U2)′ = (½f′(U0)(U1)²)′ + U1t, (4.2.19)

where ′ = ∂/∂ξ and φ̇ = dφ/dt. Some orthogonality condition should be satisfied.
4.2.1.2 Matching conditions and approximations
Step 3. This step is to find the matching conditions and then to construct approximations. Suppose that there is a uniform asymptotic expansion; then there holds, for any fixed t and large ξ,

U0(ξ, t) + εU1(ξ, t) + ε²U2(ξ, t) = u0(x, t) + εu1(x, t) + ε²u2(x, t) + O(ε³).
Using the technique in [14] again, we rewrite the right-hand side of the above equality in terms of ξ, t and make use of Taylor expansions to obtain

U0(ξ, t) + εU1(ξ, t) + ε²U2(ξ, t)
= u0(εξ, t) + εu1(εξ, t) + ε²u2(εξ, t) + O(ε³)
= u0(0, t) + u′0(0, t)εξ + ½u″0(0, t)(εξ)² + ε(u1(0, t) + u′1(0, t)εξ) + ε²u2(0, t) + O(ε³)
= u0(0, t) + ε(u′0(0, t)ξ + u1(0, t)) + ε²(½u″0(0, t)ξ² + u′1(0, t)ξ + u2(0, t)) + O(ε³), (4.2.20)

where ′ denotes ∂/∂x and the values at 0 are taken in the shifted variable x − φ(t).
Hence it follows that

U0(ξ, t) ∼ u0(0, t), (4.2.21)
U1(ξ, t) ∼ u1(0, t) + u′0(0, t)ξ, (4.2.22)
U2(ξ, t) ∼ u2(0, t) + ½u″0(0, t)ξ² + u′1(0, t)ξ, (4.2.23)

for ξ → ∞.
Thus we can establish a uniform approximation up to order O(ε²) by employing a cut-off function. Define

Oε2(x, t) = u0(x, t) + εu1(x, t) + ε²u2(x, t), (4.2.24)
Iε2(ξ, t) = U0(ξ, t) + εU1(ξ, t) + ε²U2(ξ, t). (4.2.25)

Then a uniform asymptotic expansion can be constructed:

Uε2(x, t) = η(x, t)Oε2(x, t) + (1 − η(x, t))Iε2((x − φ(t))/ε, t), (4.2.26)

where η is a cut-off function of the type introduced in (3.1.50) – (3.1.51), vanishing near the shock curve.
4.2.2 Convergence
This subsection is devoted to the investigation of the convergence of the approximation obtained in Subsection 4.2.1. Firstly, we prove that the approximation satisfies equation (4.2.1) asymptotically, that is,

(Uε2)t + (F(Uε2))x − ε(Uε2)xx = O(ε^κ). (4.2.27)

Secondly, the approximation converges to the solution u0 of the corresponding limit problem (4.2.3) – (4.2.4) as ε → 0. The main result is
Theorem 4.2.1 Suppose that condition (5.3.75) and assumptions (5.3.7) – (5.3.10) are satisfied and that δ = cε with c a positive constant. Then the approximate solutions Uε2, Vε2, Pε2 satisfy, respectively, equations (5.3.1), (5.3.3) and (5.3.5) in the following sense:

(Uε2)t − ε(Uε2)xx + (F(Uε2))x = O(ε^κ), (4.2.28)

where κ = 3γ − 1 and γ ∈ (1/3, 1).
4.3 Convergence of the approximate solutions
This section is devoted to the proof of Theorem 4.3.1 below, which consists of two parts: one is to prove Theorem 4.2.1, which asserts that the equations are satisfied asymptotically; the other is to investigate the convergence rate.
Theorem 4.3.1 Suppose that the assumptions in Theorem 4.2.1 are met. Let u, p be, respectively, the unique entropy solution with only one shock and the reversible solution to problems (5.1.1) – (5.1.2) and (5.1.8) – (5.1.9), and let v be the unique solution to problem (5.3.62) – (5.3.65), such that

∫₀ᵀ ∫_{x ≠ φ(t)} Σ_{i=1}^{6} ( |∂ⁱx u(x, t)|² + |∂ⁱx v(x, t)|² + |∂ⁱx p(x, t)|² ) dx dt ≤ C. (4.3.1)
Then the solutions (uε, vε) of problems (5.3.1) – (5.3.2) and (5.3.3) – (5.3.4) converge, respectively, to (u, v) in L∞(0, T; L²(R)) × L∞(0, T; L²(R)), and the following estimate holds:

sup_{0≤t≤T} ‖uε(t) − u(t)‖ + sup_{0≤t≤T} ‖vε(t) − v(t)‖ ≤ Cε^β. (4.3.2)

Here β is a constant defined by

β = min{ 3γ/2, (1 + κ)/2 }, where κ is the same as in Theorem 4.2.1. (4.3.3)
Remark 4.1. Combination of (5.4.4) and a stability theorem for reversible solutions (see Theorem 4.1.10 in Ref. [6]) yields that

sup_{0≤t≤T} ‖pε,n − p‖_{L∞([−R,R])} → 0, (4.3.4)

as ε → 0, then n → ∞, where R is any positive constant. This enables us to select alternating descent directions.
4.3.1 The equations are satisfied asymptotically
In this subsection we are going to prove that the approximate solutions Uε2, Vε2, Pε2 satisfy asymptotically the corresponding equations (5.3.1), (5.3.3) and (5.3.5), respectively. For simplicity, in this subsection we omit the superscript ε and write the approximate solutions as U2, V2, P2.
Proof of Theorem 4.2.1. We divide the proof into three parts.
Part 1. We first investigate the convergence of U2. Writing Ū2 = Oε2 for the outer and Ũ2 = Iε2 for the inner approximation, straightforward computations yield

(U2)t = ε^{−γ} η′ d_t (Ū2 − Ũ2) + η(Ū2)t + (1 − η)(Ũ2)t, (4.3.5)
(U2)x = ε^{−γ} η′ (Ū2 − Ũ2) + η(Ū2)x + (1 − η)(Ũ2)x, (4.3.6)
(U2)xx = ε^{−2γ} η″ (Ū2 − Ũ2) + 2ε^{−γ} η′ (Ū2 − Ũ2)x + η(Ū2)xx + (1 − η)(Ũ2)xx, (4.3.7)

where η = η(d/ε^γ) and ′ denotes the derivative of η with respect to its argument. Hereafter, to write the derivatives of Ũ2 in this form, like the first term on the right-hand side of (4.3.5), we changed the arguments t, x of Ũ2, Ṽ2, P̃2 to t, d, where d = d(x, t) is defined by

d(x, t) = x − φ(t).
However, at the risk of abusing notation, we still denote Ũ2(d, t), Ṽ2(d, t), P̃2(d, t) by Ũ2, Ṽ2, P̃2 for the sake of simplicity. After such a transformation of arguments the terms are easier to deal with, as we shall see later on.

Therefore, we find that U2 satisfies

(U2)t − ε(U2)xx + (F(U2))x = I1 + I2 + I3, (4.3.8)

where Ik (k = 1, 2, 3) are the collections of like terms according to whether or not their supports are contained in the same subdomain of R; more precisely, they are defined by

I1 = (1 − η)((Ũ2)t − ε(Ũ2)xx + f(U2)(Ũ2)x), (4.3.9)
I2 = η((Ū2)t − ε(Ū2)xx + f(U2)(Ū2)x), (4.3.10)
I3 = (Ū2 − Ũ2)(ε^{−γ}η′ d_t − ε^{1−2γ}η″ + ε^{−γ}η′ f(U2)) − 2ε^{1−γ}η′(Ū2 − Ũ2)x. (4.3.11)
It is easy to see that the supports of I1 and I2 are, respectively, subsets of {|d| ≤ 2ε^γ} and {|d| ≥ ε^γ}, while the support of I3 is a subset of {ε^γ ≤ |d| ≤ 2ε^γ}.
Now we turn to estimating I1, I2, I3. Firstly we handle I3. In this case one can apply the matching conditions (5.3.72) – (5.3.74) and use Taylor expansions to obtain

∂ˡx(Ū2 − Ũ2)(x, t) = O(1)ε^{(3−l)γ} (4.3.12)

on the domain {(x, t) : ε^γ ≤ |x − φ(t)| ≤ 2ε^γ, 0 ≤ t ≤ T}, for l = 0, 1, 2, 3. From these estimates (4.3.12), which can also be found, e.g., in Goodman and Xin [21], the following assertion follows easily:

I3 = O(1)ε^{2γ} as ε → 0. (4.3.13)

Moreover, we have

∫_R |I3(x, t)|² dx = ∫_{ε^γ ≤ |x−φ(t)| ≤ 2ε^γ} |I3(x, t)|² dx ≤ Cε^{5γ}. (4.3.14)
To deal with I1, I2 we rearrange their terms as follows:

I1 = (1 − η)((Ũ2)t − ε(Ũ2)xx + (F(Ũ2))x) + (1 − η)(f(U2) − f(Ũ2))(Ũ2)x
= I1a + I1b, (4.3.15)

I2 = η((Ū2)t − ε(Ū2)xx + (F(Ū2))x) + η(f(U2) − f(Ū2))(Ū2)x
= I2a + I2b. (4.3.16)
Moreover, I1b can be rewritten as

I1b = (1 − η) ∫₀¹ f′(sU2 + (1 − s)Ũ2) ds (U2 − Ũ2)(Ũ2)x. (4.3.17)

Note that supp I1b ⊂ {(x, t) ∈ QT : |x − φ(t)| ≤ 2ε^γ} and U2(x, t) = Ũ2(x, t) if |x − φ(t)| ≤ ε^γ. Therefore, from (4.3.12) and (4.3.17) we obtain

I1b ≤ C |(U2 − Ũ2)(Ũ2)x| = O(ε^{3γ−1}), (4.3.18)

where we choose γ > 1/3 so that 3γ − 1 > 0. Recalling the construction of Ũ2, from equations (5.3.69) – (5.3.71) we can rewrite I1a as

I1a = (1 − η)((Ũ2)t − ε(Ũ2)xx + (F(u0) + εf(u0)u1 + ε²(f(u0)u2 + ½f′(u0)u²1) + Ru)x)
= (1 − η)(Ru)x, (4.3.19)

where the remainder Ru is defined by

Ru = F(Ũ2) − (F(u0) + εf(u0)u1 + ε²(f(u0)u2 + ½f′(u0)u²1)) = O(ε³).

Thus

I1a ≤ C |(Ru)x| = C ε^{−1} |(Ru)′| = O(ε²). (4.3.20)
In a similar manner we now handle I2, and rewrite I2b as

I2b = η ∫₀¹ f′(sU2 + (1 − s)Ū2) ds (U2 − Ū2)(Ū2)x. (4.3.21)

It is easy to see that supp I2b ⊂ {|d| ≥ ε^γ} and U2 = Ū2 if |d| ≥ 2ε^γ. From the fact that U2 − Ū2 = (1 − η)(Ũ2 − Ū2) and (4.3.12) it follows that

I2b ≤ C |(U2 − Ū2)(Ū2)x| = O(ε^{3γ}). (4.3.22)

As for I2a, invoking equations (5.3.29) and (5.3.30) we assert that

I2a = η((Ū2)t − ε(Ū2)xx + (F(u0) + εf(u0)u1 + Ru)x) = η(Ru)x + O(ε²), (4.3.23)

and the remainder Ru is given by

Ru = F(Ū2) − (F(u0) + εf(u0)u1) = O(ε²),
hence

I2a = O(ε²). (4.3.24)

On the other hand, we have

∫_R |I1(x, t)|² dx = ∫_{|x−φ(t)| ≤ 2ε^γ} |I1(x, t)|² dx
≤ ∫_{|x−φ(t)| ≤ ε^γ} |I1a(x, t)|² dx + ∫_{ε^γ ≤ |x−φ(t)| ≤ 2ε^γ} |I1b(x, t)|² dx
≤ C(ε^{4+γ} + ε^{6γ−2+γ}) ≤ Cε^{6γ−2+γ}. (4.3.25)

Here we used the simple inequalities 6γ − 2 + γ < 5γ and 6γ − 2 > 0, which hold since we assume γ ∈ (1/3, 1).

Similarly, one can obtain

∫_R |I2(x, t)|² dx = ∫_{|x−φ(t)| ≥ ε^γ} |I2(x, t)|² dx ≤ C(ε⁴ + ε^{6γ}) ≤ Cε^{6γ}. (4.3.26)

In conclusion, from (5.4.10), (5.4.15), (5.4.20), (5.4.22), (5.4.24) and (5.4.26) we are in a position to assert that Uε2 satisfies the equation in the following sense:

(Uε2)t − ε(Uε2)xx + (F(Uε2))x = O(ε^κ),

as ε → 0, where κ = 3γ − 1; here we used the fact that 3γ − 1 < 2γ < 2 by the assumption γ < 1. Furthermore, from the construction we see easily that the initial data is satisfied asymptotically too.
Part 2. We now turn to investigating the convergence of V2. Similar computations show that the derivatives of V2 can be written in terms of V̄2, Ṽ2 as

(V2)t = ε^{−γ} η′ d_t (V̄2 − Ṽ2) + η(V̄2)t + (1 − η)(Ṽ2)t, (4.3.27)
(V2)x = ε^{−γ} η′ (V̄2 − Ṽ2) + η(V̄2)x + (1 − η)(Ṽ2)x, (4.3.28)
(V2)xx = ε^{−2γ} η″ (V̄2 − Ṽ2) + 2ε^{−γ} η′ (V̄2 − Ṽ2)x + η(V̄2)xx + (1 − η)(Ṽ2)xx, (4.3.29)

and V2 satisfies the following equation:

(V2)t − ε(V2)xx + (f(U2)V2)x = J1 + J2 + J3, (4.3.30)
where Jk (k = 1, 2, 3) are given, according to their supports, by

J1 = (1 − η)((Ṽ2)t − ε(Ṽ2)xx + (f(U2)Ṽ2)x), (4.3.31)
J2 = η((V̄2)t − ε(V̄2)xx + (f(U2)V̄2)x), (4.3.32)
J3 = (ε^{−γ}η′ d_t − ε^{1−2γ}η″ + ε^{−γ}η′ f(U2))(V̄2 − Ṽ2) − 2ε^{1−γ}η′(V̄2 − Ṽ2)x. (4.3.33)
Since for V2 we also have the same estimate (4.3.12) that is valid for U2, namely

∂ˡx(V̄2 − Ṽ2) = O(1)ε^{(3−l)γ} (4.3.34)

on the domain {(x, t) : ε^γ ≤ |x − φ(t)| ≤ 2ε^γ, 0 ≤ t ≤ T} for l = 0, 1, 2, 3, it follows from (4.3.34) and the uniform boundedness of U2 that

J3 = O(1)ε^{2γ}, as ε → 0. (4.3.35)
The investigation of the convergence of J1, J2 is technically more complicated than that of I1, I2. Rewrite J1, J2 as

J1 = (1 − η)((Ṽ2)t − ε(Ṽ2)xx + (f(Ũ2)Ṽ2)x) + (1 − η)((f(U2) − f(Ũ2))Ṽ2)x
= J1a + J1b, (4.3.36)

and

J2 = η((V̄2)t − ε(V̄2)xx + (f(Ū2)V̄2)x) + η((f(U2) − f(Ū2))V̄2)x
= J2a + J2b. (4.3.37)
We now deal with J1b, which can be changed to

J1b = ((∫₀¹ f′(sU2 + (1 − s)Ũ2) ds) (U2 − Ũ2) Ṽ2)x
= ((∫₀¹ f′(sU2 + (1 − s)Ũ2) ds) Ṽ2)x (U2 − Ũ2)
+ (∫₀¹ f′(sU2 + (1 − s)Ũ2) ds) Ṽ2 (U2 − Ũ2)x
= O(ε^{3γ−1}) + O(ε^{2γ}) = O(ε^{3γ−1}), (4.3.38)

here we used that 3γ − 1 < 2γ since γ < 1.
Rewriting J1a as

J1a = (1 − η)((Ṽ2)t − ε(Ṽ2)xx + ((f(u0) + f′(u0)(εu1 + ε²u2) + ½ε²f″(u0)(u1)²)Ṽ2)x) + (RvṼ2)x
= (1 − η)(O(ε²) + (RvṼ2)x). (4.3.39)

Here equations (5.3.76) – (5.3.78) were used, and the remainder Rv is

Rv = f(Ũ2) − (f(u0) + f′(u0)(εu1 + ε²u2) + ½ε²f″(u0)(u1)²) = O(ε³).

Therefore, from (4.3.39) one has

J1a = (1 − η)(O(ε²) + (RvṼ2)x) = O(ε²). (4.3.40)
The terms J2a, J2b can be estimated in a similar way, and we obtain

J2b = O(ε^{3γ}) + O(ε^{2γ}) = O(ε^{2γ}), (4.3.41)

and

J2a = η((V̄2)t − ε(V̄2)xx + ((f(u0) + εf′(u0)u1 + Rv)V̄2)x)
= η(O(ε²) + (RvV̄2)x),

where Rv is given by Rv = f(Ū2) − (f(u0) + εf′(u0)u1) = O(ε²). It is easy to see that

J2a = O(ε²). (4.3.42)
On the other hand, we have the following estimates of integral type:

∫_R |J1(x, t)|² dx = ∫_{|x−φ(t)| ≤ 2ε^γ} |J1(x, t)|² dx
≤ ∫_{|x−φ(t)| ≤ ε^γ} |J1a(x, t)|² dx + ∫_{ε^γ ≤ |x−φ(t)| ≤ 2ε^γ} |J1b(x, t)|² dx
≤ C(ε^{4+γ} + ε^{6γ−2+γ}) ≤ Cε^{6γ−2+γ}, (4.3.43)

and

∫_R |J2(x, t)|² dx = ∫_{|x−φ(t)| ≥ ε^γ} |J2(x, t)|² dx ≤ C(ε⁴ + ε^{4γ}) ≤ Cε^{4γ}. (4.3.44)
Therefore, it follows from (5.4.32), (5.4.37), (5.4.40), (5.4.42), (5.4.43) and (5.4.44) that Vε2 satisfies the equation in the following sense:

(Vε2)t − ε(Vε2)xx + (f(Uε2)Vε2)x = O(ε^{3γ−1}),

as ε → 0. By construction, the initial data is satisfied asymptotically as well.
Part 3. Finally we turn to investigating the convergence of P2. Computations show that the derivatives of P2 can be written in terms of P̄2, P̃2 as

(P2)t = ε^{−γ} η′ d_t (P̄2 − P̃2) + η(P̄2)t + (1 − η)(P̃2)t, (4.3.45)
(P2)x = ε^{−γ} η′ (P̄2 − P̃2) + η(P̄2)x + (1 − η)(P̃2)x, (4.3.46)
(P2)xx = ε^{−2γ} η″ (P̄2 − P̃2) + 2ε^{−γ} η′ (P̄2 − P̃2)x + η(P̄2)xx + (1 − η)(P̃2)xx, (4.3.47)

and P2 satisfies the following equation:

(P2)t + ε(P2)xx + f(U2)(P2)x = K1 + K2 + K3, (4.3.48)
where the $K_i$ ($i = 1, 2, 3$) are given, according to their supports, by
\[
K_1 = -\sigma\big((\tilde P_2)_t + \varepsilon(\tilde P_2)_{xx} + f(U_2^\varepsilon)(\tilde P_2)_x\big), \tag{4.3.49}
\]
\[
K_2 = -(1-\sigma)\big((\bar P_2)_t + \varepsilon(\bar P_2)_{xx} + f(U_2^\varepsilon)(\bar P_2)_x\big), \tag{4.3.50}
\]
\[
K_3 = -\Big(\frac{d\sigma}{dt} + \frac{\varepsilon}{\varepsilon^{2\beta}}\,\sigma'' + \frac{f(U_2^\varepsilon)}{\varepsilon^\beta}\,\sigma'\Big)\big(\tilde P_2 - \bar P_2\big) - \frac{2\varepsilon}{\varepsilon^\beta}\,\sigma'\big(\tilde P_2 - \bar P_2\big)_x. \tag{4.3.51}
\]
By similar arguments as done for $U_2^\varepsilon$, we can prove that
\[
-(P_2^\varepsilon)_t - \varepsilon(P_2^\varepsilon)_{xx} - f(U_2^\varepsilon)(P_2^\varepsilon)_x = O(\varepsilon^{3\beta-1}),
\]
as $\varepsilon \to 0$.
4.3.2 Proof of the convergence
This subsection is devoted to the proof of Theorem 4.1. Since it is concerned with the proof of convergence as $\varepsilon \to 0$, we denote $U_2^\varepsilon$, $V_2^\varepsilon$, $P_2^\varepsilon$ by $U^\varepsilon$, $V^\varepsilon$, $P^\varepsilon$, respectively, for the sake of simplicity. We begin with the following lemma.
Lemma 4.3.2 For $\beta$ defined in (5.4.5),
\[
\sup_{0\le t\le T}\|u^\varepsilon(\cdot,t) - u(\cdot,t)\| + \sup_{0\le t\le T}\|v^\varepsilon(\cdot,t) - v(\cdot,t)\| \le C\varepsilon^{\min\{3\beta/2,\,(1+\alpha)/2\}}, \tag{4.3.52}
\]
\[
\sup_{0\le t\le T}\|p^{\varepsilon,n}(\cdot,t) - p(\cdot,t)\|_{L^1_{loc}(\mathbb R)} \to 0, \tag{4.3.53}
\]
as $\varepsilon \to 0$ first and then $n \to \infty$. Here $p^{\varepsilon,n}$ denotes the solution to the smoothed adjoint problem (5.3.5) – (5.3.6).
Proof. Firstly, by construction of the approximate solutions, we conclude that:
i) for $(x,t) \in \{|x - \bar x(t)| \ge 2\varepsilon^\beta,\ 0 \le t \le T\}$,
\[
U_2^\varepsilon(x,t) = u(x,t) + O(1)\varepsilon,
\]
where $O(1)$ denotes a function which is square-integrable over the outer region, by the argument in Subsection 4.1;
ii) for $(x,t) \in \{|x - \bar x(t)| \le \varepsilon^\beta\}$,
\[
U_2^\varepsilon(x,t) = u_0(x,t) + O(1)\varepsilon;
\]
iii) and for $(x,t) \in \{\varepsilon^\beta \le |x - \bar x(t)| \le 2\varepsilon^\beta\}$,
\[
U_2^\varepsilon(x,t) = \bar U_2(x,t) + \sigma(x,t)\big(\tilde U_2(x,t) - \bar U_2(x,t)\big) = \bar U_2(x,t) + O(1)\varepsilon^{3\beta}.
\]
Here we have used again the estimate $\tilde U_2(x,t) - \bar U_2(x,t) = O(1)\varepsilon^{3\beta}$. We can also establish similar estimates for $V_2^\varepsilon$, $P_2^\varepsilon$. Therefore, one can obtain
\[
\sup_{0\le t\le T}\|u(\cdot,t) - U^\varepsilon(\cdot,t)\|^2 + \sup_{0\le t\le T}\|v(\cdot,t) - V^\varepsilon(\cdot,t)\|^2 \le C\varepsilon^{\min\{3\beta,\,2\}}, \tag{4.3.54}
\]
\[
\sup_{0\le t\le T}\|p^n(\cdot,t) - P^\varepsilon(\cdot,t)\|^2 \le C\varepsilon^{\min\{3\beta,\,2\}}, \tag{4.3.55}
\]
where $p^n$ is the reversible solution to the inviscid adjoint equation $-\partial_t p - f(u)\partial_x p = 0$ with final data $p(x,T) = p^T_n(x)$.
By Theorem 4.1.10, which is concerned with the stability (with respect to the coefficient and the final data) of reversible solutions in [6] (see also Theorem 5.1 in the appendix of this chapter), the assumptions on the final data $p^T_n$ and the one-sided Lipschitz condition, we conclude that the reversible solution $p^n$ of the backward problem with final data $p^T_n$ satisfies
\[
p^n \to p \quad \text{in } C([0,T] \times [-R,R])
\]
for any $R > 0$; here $p$ is the reversible solution to $-p_t - f(u)p_x = 0$ in $\mathbb R \times (0,T)$ with final data $p(x,T) = p^T(x)$ for $x \in \mathbb R$. Therefore, we have
\[
\sup_{0\le t\le T}\|p^n - p\|_{L^1_{loc}(\mathbb R)} \to 0, \tag{4.3.56}
\]
as $n \to \infty$.
Secondly, we need to estimate $\sup_{0\le t\le T}\|u^\varepsilon(\cdot,t) - U^\varepsilon(\cdot,t)\|^2$, etc. Suppose that we have obtained
\[
\sup_{0\le t\le T}\|u^\varepsilon(\cdot,t) - U^\varepsilon(\cdot,t)\|^2 \le C\varepsilon^{\gamma_1}, \tag{4.3.57}
\]
where $\gamma_1 = 1 + \alpha$. Then we arrive at (5.4.54) by using the triangle inequality, the estimate (5.4.56) and the fact that
\[
\min\{3\beta/2,\ (1+\alpha)/2,\ 1\} = \min\{3\beta/2,\ (1+\alpha)/2\}
\]
(since $1/3 < \beta < 1$). The conclusions for $v$, $p$ follow in the same way.
Part 1. Now we prove (5.4.59). To this end, we define
\[
w(x,t) = u^\varepsilon(x,t) - U^\varepsilon(x,t).
\]
Then $w(x,t)$ satisfies
\[
w_t - \varepsilon w_{xx} + f(u^\varepsilon)w_x = F^\varepsilon, \tag{4.3.58}
\]
\[
w(x,0) = w_0(x). \tag{4.3.59}
\]
Here
\[
F^\varepsilon := \sum_{i=1}^3 I_i - Q(x,t) - f'(U^\varepsilon)\,w\,U^\varepsilon_x, \tag{4.3.60}
\]
\[
Q := \big(f(u^\varepsilon) - f(U^\varepsilon) - f'(U^\varepsilon)w\big)U^\varepsilon_x. \tag{4.3.61}
\]
Rescaling as follows:
\[
w(x,t) = \hat w(y,\tau), \quad \text{where } y = \frac{x - \bar x(t)}{\varepsilon} \ \text{ and } \ \tau = \frac{t}{\varepsilon},
\]
one has
\[
w_t(x,t) = \frac{1}{\varepsilon}\hat w_\tau(y,\tau) - \frac{\bar x'(t)}{\varepsilon}\hat w_y, \qquad w_x(x,t) = \frac{1}{\varepsilon}\hat w_y(y,\tau), \qquad w_{xx}(x,t) = \frac{1}{\varepsilon^2}\hat w_{yy}(y,\tau),
\]
and problem (5.4.60) – (5.4.61) turns into
\[
\hat w_\tau - \hat w_{yy} + \big(f(u^\varepsilon) - \bar x'(t)\big)\hat w_y = \varepsilon F^\varepsilon(\varepsilon y + \bar x(t), \varepsilon\tau), \tag{4.3.62}
\]
\[
\hat w(y,0) = \hat w_0(y). \tag{4.3.63}
\]
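The chain-rule identities above are mechanical but easy to get wrong, and they can be checked symbolically. The sketch below (assuming SymPy is available; the trial profile $\hat w$ and shock curve $\bar x$ are arbitrary smooth choices for the check, not the actual solution) verifies the three formulas for $w_t$, $w_x$, $w_{xx}$:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', positive=True)
Y, T = sp.symbols('Y T')             # placeholders for y and tau

xbar = sp.sin(t)                     # arbitrary smooth trial shock curve xbar(t)
w_hat = sp.exp(-Y**2) * sp.cos(T)    # arbitrary smooth trial profile w_hat(y, tau)

y, tau = (x - xbar) / eps, t / eps   # stretched variables
w = w_hat.subs({Y: y, T: tau})       # w(x, t) = w_hat(y, tau)

# partial derivatives of w_hat, evaluated at (y, tau)
wh_y = sp.diff(w_hat, Y).subs({Y: y, T: tau})
wh_tau = sp.diff(w_hat, T).subs({Y: y, T: tau})
wh_yy = sp.diff(w_hat, Y, 2).subs({Y: y, T: tau})

# w_t = (w_hat_tau - xbar'(t) w_hat_y)/eps, w_x = w_hat_y/eps, w_xx = w_hat_yy/eps^2
assert sp.simplify(sp.diff(w, t) - (wh_tau - sp.diff(xbar, t) * wh_y) / eps) == 0
assert sp.simplify(sp.diff(w, x) - wh_y / eps) == 0
assert sp.simplify(sp.diff(w, x, 2) - wh_yy / eps**2) == 0
print("chain-rule identities verified")
```

Since the check holds for arbitrary smooth $\hat w$ and $\bar x$, it confirms the identities used to pass from (5.4.60) – (5.4.61) to the rescaled problem.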
Here the initial data $\hat w_0$ can be chosen so small that
\[
\|\hat w_0\|^2_{H^1(\mathbb R)} \le C\varepsilon^\alpha,
\]
provided that $\varepsilon$ is suitably small. The existence of a solution $\hat w \in C([0, T/\varepsilon]; H^1(\mathbb R))$ to problem (5.4.64) – (5.4.65) follows from the method of continuation of the local solution, which is based on the local existence of solutions and the a priori estimates stated in the following proposition.
Proposition 4.3.3 (A priori estimates) Suppose that problem (5.4.64) – (5.4.65) has a solution $\hat w \in C([0, \tau_0]; H^1(\mathbb R))$ for some $\tau_0 \in (0, T/\varepsilon]$. There exist positive constants $\varepsilon_1$, $\delta_1$ and $C$, which are independent of $\varepsilon$, $\tau_0$, such that if
\[
\varepsilon \in (0, \varepsilon_1], \qquad \sup_{0\le\tau\le\tau_0}\|\hat w(\cdot,\tau)\|_{H^1(\mathbb R)} + \varepsilon\tau_0 \le \delta_1, \tag{4.3.64}
\]
then
\[
\sup_{0\le\tau\le\tau_0}\|\hat w(\cdot,\tau)\|^2_{H^1(\mathbb R)} + \int_0^{\tau_0}\|\hat w(\cdot,\tau)\|^2_{H^2(\mathbb R)}\,d\tau \le C\varepsilon^\alpha.
\]
Proof. Step 1. By the maximum principle and the construction of the approximate solution $U^\varepsilon$, we have
\[
\|u^\varepsilon\|_{L^\infty(Q_{\tau_0})} \le C, \qquad \|U^\varepsilon\|_{L^\infty(Q_{\tau_0})} \le C. \tag{4.3.65}
\]
Whence, from the smallness assumption (5.4.66) and the definition of $Q$, it follows that
\[
|Q(x,t)| \le C|w|^2|U^\varepsilon_x|. \tag{4.3.66}
\]
Step 2. Multiplying eq. (5.4.64) by $\hat w$ and integrating the resulting equation with respect to $y$ over $\mathbb R$, we obtain
\[
\frac{1}{2}\frac{d}{d\tau}\|\hat w\|^2 + \|\hat w_y\|^2 + \int_{\mathbb R}\big(f(u^\varepsilon) - \bar x'(t)\big)\hat w_y\hat w\,dy = \int_{\mathbb R}\varepsilon F^\varepsilon(\varepsilon y + \bar x, \varepsilon\tau)\,\hat w\,dy. \tag{4.3.67}
\]
We first deal with the term
\[
\int_{\mathbb R} f'(U^\varepsilon)\hat w\,(\varepsilon U^\varepsilon_x)\,\hat w\,dy = \int_{\mathbb R} f'(U^\varepsilon)\hat w^2\,(\varepsilon U^\varepsilon_x)\,dy.
\]
From the property of the profile $U$ we have that $\varepsilon U^\varepsilon_x = U_y$ is uniformly bounded, with $U_y \to 0$ as $|y| \to \infty$. Thus
\[
\Big|\int_{\mathbb R} f'(U^\varepsilon)\hat w\,U_y\,\hat w\,dy\Big| \le C\|\hat w\|^2.
\]
For the term involving $Q$, it is easy to obtain that
\[
\Big|\int_{\mathbb R}\varepsilon Q(y,\tau)\,\hat w\,dy\Big| \le C\int_{\mathbb R}\hat w^2\,|U_y|\,|\hat w|\,dy \le C\|\hat w\|^2.
\]
It remains to deal with the term $\int_{\mathbb R} I\hat w\,dy$, where $I = \sum_{i=1}^3 I_i$. We invoke the $L^2$-norm estimates for $I$, i.e. (5.4.27) and (5.4.28), which were obtained in Subsection 3.1, and get
\[
\int_0^\tau\!\!\int_{\mathbb R}|I\hat w|\,dy\,d\tau' \le C\Big(\int_0^\tau\!\!\int_{\mathbb R}|I|^2\,dy\,d\tau' + \int_0^\tau\!\!\int_{\mathbb R}\hat w^2\,dy\,d\tau'\Big) \le C\varepsilon^\alpha + C\int_0^\tau\!\!\int_{\mathbb R}\hat w^2\,dy\,d\tau'. \tag{4.3.68}
\]
Finally, by the Young inequality one gets
\[
\Big|\int_{\mathbb R}\big(f(u^\varepsilon) - \bar x'(t)\big)\hat w_y\hat w\,dy\Big| \le \frac{1}{2}\|\hat w_y\|^2 + C\|\hat w\|^2.
\]
Therefore, the above arguments and an integration of (5.4.69) with respect to $\tau$ yield
\[
\frac{1}{2}\|\hat w(\tau)\|^2 + \frac{1}{2}\int_0^\tau\|\hat w_y(s)\|^2\,ds \le C\int_0^\tau\|\hat w(s)\|^2\,ds + C\varepsilon^\alpha, \tag{4.3.69}
\]
from which and the Gronwall inequality in integral form we then arrive at
\[
\|\hat w(\tau)\|^2 \le C\varepsilon^\alpha.
\]
So we also obtain
\[
\|u^\varepsilon(\cdot,t) - U^\varepsilon(\cdot,t)\|^2 = \|w(\cdot,t)\|^2 \le C\varepsilon^{1+\alpha}.
\]
Step 3. Next we multiply eq. (5.4.65) by $-\hat w_{yy}$ and integrate the resulting equation with respect to $y$ to get
\[
\frac{1}{2}\frac{d}{d\tau}\|\hat w_y\|^2 + \|\hat w_{yy}\|^2 - \int_{\mathbb R}\big(f(u^\varepsilon) - \bar x'(t)\big)\hat w_{yy}\hat w_y\,dy
 = -\int_{\mathbb R}\varepsilon F^\varepsilon(\varepsilon y + \bar x, \varepsilon\tau)\,\hat w_{yy}\,dy. \tag{4.3.70}
\]
Using (5.4.67) and the estimates on $I_i$ ($i = 1, 2, 3$), from the Young inequality we then arrive at
\[
\frac{1}{2}\|\hat w_y(\tau)\|^2 + \frac{1}{2}\int_0^\tau\|\hat w_{yy}(s)\|^2\,ds \le C\int_0^\tau\|\hat w_y(s)\|^2\,ds + C\varepsilon^\alpha, \tag{4.3.71}
\]
and making use of Gronwall's inequality again one has
\[
\|\hat w_y(\tau)\|^2 \le C\varepsilon^\alpha. \tag{4.3.72}
\]
Chapter 5

An application to optimal control theory
5.1 Introduction
Optimal control for hyperbolic conservation laws requires a considerable analytical effort and is computationally expensive in practice; it is thus a difficult topic. Some methods have been developed in recent years to reduce the computational cost and to render this type of problem affordable. In particular, the authors of [11] have recently developed an alternating descent method that takes into account the possible shock discontinuities, for the optimal control of the inviscid Burgers equation in one space dimension. Further, in [12] the vanishing viscosity method is employed to study this alternating descent method for the Burgers equation, with the aid of the Hopf-Cole formula, which can be found in [24, 37], for instance. Most results in [12] are formal.
In the present chapter we revisit this alternating descent method in the context of one-dimensional viscous scalar conservation laws with a general nonlinearity. The vanishing viscosity method and the method of matched asymptotic expansions will be applied to study this optimal control problem and to justify the results rigorously.
To be more precise, we state the optimal control problem as follows. For a given $T > 0$, we study the inviscid problem
\[
u_t + (F(u))_x = 0, \quad \text{in } \mathbb R \times (0,T); \tag{5.1.1}
\]
\[
u(x,0) = u^I(x), \quad x \in \mathbb R. \tag{5.1.2}
\]
Here $F : \mathbb R \to \mathbb R$, $u \mapsto F(u)$, is a smooth function, and $f$ denotes its derivative in what follows. The case $F(u) = \frac{u^2}{2}$ is studied in e.g. [11, 12].
Given a target $u^D \in L^2(\mathbb R)$, we consider the cost functional to be minimized, $J : L^1(\mathbb R) \to \mathbb R$, defined by
\[
J(u^I) = \int_{\mathbb R}\big|u(x,T) - u^D(x)\big|^2\,dx,
\]
where $u(x,t)$ is the unique entropy solution to problem (5.1.1) – (5.1.2).
We also introduce the set of admissible initial data $\mathcal U_{ad} \subset L^1(\mathbb R)$, which we shall define later in order to guarantee the existence of the following optimization problem:
Find $u^{I,\min} \in \mathcal U_{ad}$ such that
\[
J(u^{I,\min}) = \min_{u^I \in \mathcal U_{ad}} J(u^I).
\]
This is one of the model optimization problems often addressed in the context of optimal aerodynamic design, the so-called inverse design problem; see e.g. [19].
The existence of minimizers has been proved in [11]. From a practical point of view, however, it is more important to develop efficient algorithms for computing accurate approximations of discrete minimizers. The most efficient methods to approximate minimizers are gradient methods.
But for large complex systems, such as the Euler equations in higher dimensions, the most efficient existing numerical schemes (upwind, Godunov, etc.) are not differentiable. In this case the gradient of the functional is not well defined, and there is no natural and systematic way to compute its variations. Due to this difficulty, it is natural to explore the possible use of non-smooth optimization techniques. Two approaches have been developed. The first is based on automatic differentiation; the second is the so-called continuous method, consisting of two steps: one first linearizes the continuous system (5.1.1) to obtain a descent direction of the continuous functional $J$, and then takes a numerical approximation of this descent direction with the discrete values provided by the numerical scheme. However, the continuous method faces another major drawback when solutions develop shock discontinuities, as is the case for hyperbolic conservation laws like (5.1.1) considered here.
The formal differentiation of the continuous state equation (5.1.1) yields
\[
\partial_t(\delta u) + \partial_x\big(f(u)\delta u\big) = 0, \quad \text{in } \mathbb R \times (0,T). \tag{5.1.3}
\]
But this is only justified when the state $u$, on which the variations are computed, is smooth enough. In particular, it is not justified when the solutions are discontinuous, since singular terms may appear in the linearization at the shock location. Accordingly, in optimal control applications we also need to take into account the sensitivity with respect to the shock location (which has been studied by many authors; see, e.g., [9, 20, 33]). Roughly speaking, the main conclusion of that analysis is that the classical linearized system for the variation of the solutions must be complemented with new equations for the sensitivity of the shock position.
To overcome this difficulty, we naturally think of another way, namely the vanishing viscosity method (as in [12], where an optimal control problem for the Burgers equation is studied), and add an artificial viscosity term to smooth the state equation. Equation (5.1.1) with smoothed initial datum then turns into
\[
u_t + (F(u))_x = \varepsilon u_{xx}, \quad \text{in } \mathbb R \times (0,T), \tag{5.1.4}
\]
\[
u|_{t=0} = g. \tag{5.1.5}
\]
Note that the Cauchy problem (5.1.4) – (5.1.5) is of parabolic type; thus, by the standard theory of parabolic equations (see, for instance, Ladyzhenskaya et al. [28]), the solution $u$ of this problem is smooth. So the linearization of eq. (5.1.4) can be derived easily, which reads
\[
(\delta u)_t + \big(f(u)\delta u\big)_x = \varepsilon (\delta u)_{xx}, \quad \text{in } \mathbb R \times (0,T), \tag{5.1.6}
\]
\[
\delta u|_{t=0} = h. \tag{5.1.7}
\]
Here $\varepsilon$, $\delta$ are positive constants, and $\delta u$ denotes the variation of $u$. The initial data $g$, $h$ will be chosen suitably in Section 3, so that the perturbations of the initial data and of the shock position are taken into account; this allows us to select the alternating descent directions in the case of viscous conservation laws.
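For orientation, the viscous state equation and its linearization can be solved side by side with a crude explicit scheme. The sketch below (for $F(u) = u^2/2$; the grid, time step and initial data are illustrative choices, not those of the text) shows how the viscosity replaces the jump of the inviscid solution by a smooth transition layer:

```python
import numpy as np

def solve_state_and_variation(g, h, eps, L=4.0, N=400, T_end=0.3):
    """Explicit central-difference sketch of the viscous state equation
    u_t + (u^2/2)_x = eps * u_xx          (cf. (5.1.4) with F(u) = u^2/2)
    and of its linearization
    (du)_t + (u * du)_x = eps * (du)_xx   (cf. (5.1.6))."""
    x = np.linspace(-L, L, N)
    dx = x[1] - x[0]
    dt = 0.2 * min(dx, dx**2 / eps)      # crude explicit stability restriction
    dxc = lambda q: np.concatenate(([0.0], (q[2:] - q[:-2]) / (2 * dx), [0.0]))
    lap = lambda q: np.concatenate(([0.0], (q[2:] - 2 * q[1:-1] + q[:-2]) / dx**2, [0.0]))
    u, du = g(x), h(x)
    for _ in range(int(T_end / dt)):
        # both updates use the state u from the previous step
        u, du = (u + dt * (-dxc(u * u / 2) + eps * lap(u)),
                 du + dt * (-dxc(u * du) + eps * lap(du)))
    return x, u, du

# decreasing step datum -> a (smoothed) stationary shock at x = 0 for Burgers
x, u, du = solve_state_and_variation(lambda s: -np.tanh(5 * s),
                                     lambda s: np.exp(-s**2), eps=0.1)
```

With $\varepsilon = 0.1$ the transition layer has width of order $\varepsilon$, and the variation $\delta u$ stays smooth as well; shrinking $\varepsilon$ sharpens the layer, which is exactly the quasi-shock region discussed below.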
To solve the optimal control problem, we also need the following adjoint problem:
\[
-p_t - f(u)p_x = 0, \quad \text{in } \mathbb R \times (0,T); \tag{5.1.8}
\]
\[
p(x,T) = p^T(x), \quad x \in \mathbb R, \tag{5.1.9}
\]
where $p^T(x) = u(x,T) - u^D(x)$. We smooth equation (5.1.8) and the final data as follows:
\[
-p_t - f(u)p_x = \varepsilon p_{xx}, \quad \text{in } \mathbb R \times (0,T); \tag{5.1.10}
\]
\[
p(x,T) = p^T(x), \quad x \in \mathbb R. \tag{5.1.11}
\]
Since the solutions $u = u(x,t;\varepsilon,\delta)$, $\delta u = \delta u(x,t;\varepsilon,\delta)$ are smooth, shocks vanish; instead, quasi-shock regions are formed. Natural questions then arise: 1) How should $\varepsilon$, $\delta$ go to zero? More precisely, can $\varepsilon$, $\delta$ go to zero independently? Which one goes to zero faster, or do they go at the same speed? What happens as the two parameters $\varepsilon, \delta \to 0$? 2) What are the limits of equations (5.1.10), (5.1.6) and (5.1.4), respectively? 3) To solve the optimal control problem correctly, the state of system (5.1.3) should be understood as a pair $(\delta u, \delta\varphi)$, where $\delta\varphi$ is the variation of the shock position. As $\varepsilon, \delta \to 0$, is there an equation for $\delta\varphi$ which determines the evolution of $\delta\varphi$ and complements equation (5.1.3)?
To answer these questions, we shall make use of the method of matched asymptotic expansions. Our main results are the following: the parameters $\varepsilon$, $\delta$ must satisfy
\[
\delta = \gamma\varepsilon,
\]
where $\gamma$ is a given positive constant. This means that $\varepsilon$, $\delta$ must go to zero at the same order, though their speeds may differ: writing $\gamma = \delta/\varepsilon$, we see that if $\gamma > 1$ then $\varepsilon$ goes to zero faster than $\delta$, and vice versa.
We now fix $\varepsilon$, which is assumed to be very small. As $\gamma \to \infty$, namely $\varepsilon/\delta \to 0$, the equation for the variation of the shock position differs from the one derived by Bressan and Marson [9], etc., by a term which converges to zero as $\gamma$ tends to infinity, but which may be very large if $\gamma$ is small enough. Thus we conclude that:
1) The equation derived by Bressan and Marson is suitable for developing the numerical scheme when $\gamma$ is sufficiently large. In this case, the perturbation of the initial data plays a dominant role and the effect due to the artificial viscosity can be omitted.
2) However, if $\gamma$ is small, then the effect of viscosity must be taken into account while the perturbation of the initial data can be neglected, and a corrector should be added.
We shall prove that the solutions to problem (5.1.4) – (5.1.5) and problem (5.1.10) – (5.1.11) converge, respectively, to the entropy solution and to the reversible solution of the corresponding inviscid problems, while the solution to problem (5.1.6) – (5.1.7) converges, in the subregions away from the shock, to the solution of (5.1.3), complemented by an equation which governs the evolution of the variation of the shock position.
Furthermore, using the method of asymptotic expansions we also clarify some formal expansions used frequently in the theory of optimal control: they are valid only away from the shock and when some parameter is not too small. For example, the solution $u^{\varepsilon,\delta}$ to problem (5.1.4) – (5.1.5) is normally expanded as
\[
u^{\varepsilon,\delta} = u + \delta\,\delta u + O(\delta^2), \tag{5.1.12}
\]
where $u$ is usually believed to be the entropy solution to problem (5.1.1) – (5.1.2), and $\delta u$ is the variation of $u^{\varepsilon,\delta}$, namely the solution to (5.1.6) – (5.1.7). However, (5.1.12) is not correct near the shock if, for instance, $u^{\varepsilon,\delta}(x,0)$ is merely bounded. Under this assumption, $u^{\varepsilon,\delta}$ is continuous and uniformly bounded by the theory of parabolic equations, and $\delta u$ is continuous too; it would then follow from (5.1.12) that $u$ should be continuous as well, whereas $u$ is normally discontinuous. Therefore, we should understand (5.1.12) as a multiscale expansion, and assume that $u = u(x, x/\varepsilon, t)$, $\delta u = \delta u(x, x/\varepsilon, t)$. We shall obtain such an expansion by the method of matched asymptotic expansions.
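Schematically, the multiscale reading just described separates an outer expansion, valid away from the shock, from an inner one written in the stretched variable. The following display is only an illustration of the ansatz (the names $\bar x$, $U$, $V$ and the precise orders are placeholders, to be constructed in the next sections):

```latex
% Outer (away from the shock) vs. inner (quasi-shock region) description
% of u^{eps,delta}; y = (x - \bar x(t))/\varepsilon is the stretched variable.
\[
  u^{\varepsilon,\delta}(x,t) \approx
  \begin{cases}
    u(x,t) + \delta\,\delta u(x,t) + O(\delta^{2}),
      & |x-\bar x(t)| \gg \varepsilon \quad (\text{outer}),\\[4pt]
    U\!\left(\dfrac{x-\bar x(t)}{\varepsilon},\,t\right)
      + \delta\,V\!\left(\dfrac{x-\bar x(t)}{\varepsilon},\,t\right) + \cdots,
      & |x-\bar x(t)| = O(\varepsilon) \quad (\text{inner}),
  \end{cases}
\]
% matched by requiring that the inner profiles tend, as y -> +-infinity,
% to the boundary values of the outer expansion on each side of the shock.
```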
The new features of the method of asymptotic expansions in this chapter are mainly the following. Firstly, our expansions for $u^{\varepsilon,\delta}$ and $\delta u^{\varepsilon,\delta}$ differ from the standard ones, because equations (5.1.6) and (5.1.4) are not independent: (5.1.6) is the variation of (5.1.4), so when constructing asymptotic expansions we must take this fact into account and find compatibility conditions between the asymptotic expansions of $u^{\varepsilon,\delta}$ and $\delta u^{\varepsilon,\delta}$. Secondly, we derive the equation for the variation of the shock location from the outer expansions, not from the inner expansions as usual (see, e.g., [14]). Our approach is based upon a key observation: the outer expansions converge to their values at the shock, and the quasi-shock region vanishes as $\varepsilon \to 0$.
We need to introduce some notation. For any $t > 0$, we define $Q_t = \mathbb R \times (0,t)$. $C(t)$, $C_a$ denote constants depending on $t$, $a$, respectively, and $C$ is a universal constant in Subsection 4.2.
For a function $f = f(r,t)$, where $r = r(x,t;\varepsilon)$: $f_t$ denotes the partial derivative with respect to $t$, while $(f)_t = f_t + f_r r_t$, and so on.
Let $X$ be a Banach space endowed with a norm $\|\cdot\|_X$, and $f : [0,T] \to X$. For any fixed $t$, the $X$-norm of $f$ is denoted by $\|f(t)\|_X$; when $X = L^2(\mathbb R)$, we write $\|f(t)\| = \|f(t)\|_X$, and sometimes the argument $t$ is omitted.
Landau symbols $O(1)$ and $o(1)$. A quantity $f(x,t;\varepsilon) =$