Transcript of 5310_Ch1_fall00 (8/3/2019)

1 Methods for Ordinary Differential Equations

This chapter will introduce the reader to the terminology and notation of differential equations. Students will also be reminded of some of the elementary solution methods they are assumed to have encountered in an undergraduate course on the subject. At the conclusion of this review one should have an idea of what it means to solve a differential equation and some confidence that one could construct a solution to some simple and special types of differential equations.

    1.1 Definitions and Notation

Differential equations are divided into two classes, ordinary and partial. An ordinary differential equation (ODE) involves one independent variable and derivatives with respect to that variable. A partial differential equation (PDE) involves more than one independent variable and corresponding partial derivatives. The first semester of this course is about ordinary differential equations. The phrase differential equation will usually mean ordinary differential equation unless the context makes it clear that a partial differential equation is intended. The order of the highest derivative in the differential equation is the order of the equation.

Example (1.1.1)

(a) $x^2 \dfrac{d^3y}{dx^3} - 6x \dfrac{dy}{dx} + 10y = 0$  (3rd order ODE)

(b) $\ddot{x}(t) + \sin x(t) = 0$  (2nd order ODE)

(c) $y'' + y = 0$  (2nd order ODE)

(d) $\dot{x} + x = 0$  (1st order ODE)

(e) $\dfrac{\partial u}{\partial x} - \dfrac{\partial u}{\partial y} = 0$  (1st order PDE)

(f) $\dfrac{\partial^2 u}{\partial t^2} = \dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2}$  (2nd order PDE)

From the equation it is usually clear which is the independent variable and which is dependent. For example, in (a) it is implied that $y = y(x)$, while in (b) the dependence of $x$ upon $t$ is explicitly noted. In (c) and (d) the choice of the independent variable is arbitrary, but as a matter of choice we will usually regard $y = y(x)$ and $x = x(t)$. The latter notation is usually reserved for the discussion of dynamical systems, in which case one should think of $x(t)$ as denoting the state of a system at time $t$. In contrast, the notation $y = y(x)$ might be used in the context of a boundary value problem when $x$ has the interpretation of a spatial coordinate. Of course this is much ado


    about nothing and the student should be prepared to see differential equations written in a variety

    of ways.

For the partial differential equation in (e), $u = u(x, y)$, while in (f), $u = u(x, y, t)$. We will often denote partial derivatives by subscripts, in which case (e) could be written $u_x - u_y = 0$ and similarly for (f), $u_{tt} = u_{xx} + u_{yy}$.

Differential equations are also divided into two other classes, linear and nonlinear. An $n$th order linear differential equation can be put into the form

$$a_0 y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y' + a_n y = b$$

where $b$ and the coefficients $a_k$ depend on at most the independent variable. If $b = 0$ the equation is said to be homogeneous; otherwise it is called nonhomogeneous.

Example (1.1.2)

$y' = y + 1$  linear, nonhomogeneous

$\dot{x} \sin t = x$  linear, homogeneous

$(y')^2 = x + y$  nonlinear

General differential equations of order 1 and 2 may be written as

$$F(x, y, y') = 0 \qquad (1.1.1)$$
$$F(x, y, y', y'') = 0$$

with the obvious notation for higher order equations. Such equations are often assumed to be solvable for the highest derivative and then written as

$$y' = f(x, y) \qquad (1.1.2)$$
$$y'' = f(x, y, y').$$

A differentiable function $y = \phi(x)$ is a solution of (1.1.1) on an interval $J$ if $F(x, \phi(x), \phi'(x)) = 0$ for all $x \in J$. Usually a first order equation has a family of solutions $y = \phi(x, c)$ depending on a single parameter $c$. A second order equation usually has a two-parameter family of solutions $y = \phi(x, c_1, c_2)$, and so on. These parameters are like constants of integration and may be determined by specifying initial conditions corresponding to some time when a process starts.

Example (1.1.3) A two-parameter family of solutions of $\ddot{x} = 6$ is

$$x = 3t^2 + c_1 t + c_2. \qquad (1.1.3)$$

The solution that satisfies the initial conditions

$$x(0) = 1, \quad \dot{x}(0) = -1$$

is

$$x = 3t^2 - t + 1.$$


Because solutions may involve one or more integrations, the graph of a solution is called an integral curve. The integral curves in the previous example are parabolas. A family of functions is a complete solution of a differential equation if every member of the family satisfies the differential equation and every solution of the differential equation is a member of the family. The family of functions given by (1.1.3) is the complete solution of $\ddot{x} = 6$.

For the student who is familiar with the terminology of a general solution, we should state at this point that the notion of a general solution will only be used in the context of linear equations. The appropriate definitions will be introduced in Chapter 3, at which time the distinction between a general and a complete solution will be made clear.

Example (1.1.4) The family

$$y = \phi(x, c) = \frac{1}{\sqrt{2x + c}}$$

solves $y' + y^3 = 0$. The family is not complete since the trivial solution $y(x) = 0$ (also denoted $y \equiv 0$) cannot be obtained for any value of $c$.
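The claim in this example is easy to check symbolically. The following sketch (not part of the original notes) assumes SymPy is available:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)  # positivity keeps the square root real
y = 1 / sp.sqrt(2*x + c)

# residual of y' + y^3; it should vanish identically since y' = -(2x+c)^(-3/2) = -y^3
residual = sp.simplify(sp.diff(y, x) + y**3)
print(residual)  # 0
```

The trivial solution $y \equiv 0$ clearly satisfies the equation as well, which is the point of the incompleteness remark.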

In equation (1.1.2) the left hand member $dy/dx$ denotes the slope of a solution curve, whereas the right hand member $f(x, y)$ gives the value of the slope at $(x, y)$. From this point of view the solution of an ordinary differential equation is a curve that flows along directions specified by $f(x, y)$. The use of $dy/dx$ breaks down when the tangent is vertical, but the geometric interpretation remains meaningful. To deal with this matter note that a smooth curve in the $(x, y)$ plane may be described locally by $y = \phi(x)$ or $x = \psi(y)$. If $dy/dx$ is meaningless, $dx/dy$ may be permissible. Consequently, it may be advantageous to treat $x$ or $y$ as independent or dependent variables in formulating a differential equation. These remarks motivate the following discussion.

A differential equation is said to be in differential form if it is written as

$$M(x, y)\,dx + N(x, y)\,dy = 0. \qquad (1.1.4)$$

A differentiable function $y = \phi(x)$ is a solution of (1.1.4) on an interval if the substitutions

$$y = \phi(x), \quad dy = \phi'(x)\,dx$$

make (1.1.4) true. A differentiable function $x = \psi(y)$ is a solution if the substitutions

$$x = \psi(y), \quad dx = \psi'(y)\,dy$$

make (1.1.4) true. An equation

$$G(x, y) = c, \quad c \text{ constant} \qquad (1.1.5)$$

is said to furnish an implicit solution of (1.1.2) if (1.1.5) can be solved for $y = \phi(x)$ or $x = \psi(y)$ in a neighborhood of a point $(x, y)$ satisfying (1.1.5), and $\phi$ or $\psi$ is a solution in the sense defined above. When (1.1.5) furnishes an implicit solution, the graph of this equation is also called an integral curve.


There is some ambiguity as to what one is expected to produce when asked to solve a differential equation. Often the differential equation is regarded as solved when one has derived an equation such as (1.1.5). However, the context of the problem may make it clear that one needs to produce an explicit formula for the solution, say $y = \phi(x)$ or $x = \psi(y)$. The next two examples will expound upon this issue, but in general the specific problem at hand will usually dictate what is meant by the phrase "solution."

Example (1.1.5) Consider the equation in differential form

$$x\,dx + y\,dy = 0.$$

This equation is easily solved by writing it as a pure derivative, i.e.,

$$\tfrac{1}{2}\,d(x^2 + y^2) = 0.$$

When written in this form it is apparent that possible implicit solutions are

$$x^2 + y^2 = C. \qquad (1.1.6)$$

If $C < 0$, the locus is empty. If $C = 0$, the locus consists of the point $(0, 0)$. This case does not correspond to a solution since it does not describe a differentiable curve of the form $y = \phi(x)$ or $x = \psi(y)$. If $C > 0$, the integral curves are circles centered at the origin.

Example (1.1.6) Consider the three equations

(a) $\dfrac{dy}{dx} = -\dfrac{x}{y}$  (b) $\dfrac{dx}{dy} = -\dfrac{y}{x}$  (c) $x\,dx + y\,dy = 0.$

(a) This form of the equation implicitly requires a solution $y = \phi(x)$, and no integral curve can contain a point $(x, y)$ where $y = 0$.

(b) This form requires a solution $x = \psi(y)$, and no integral curve can contain a point $(x, y)$ where $x = 0$.

(c) This equation allows for solutions of the form $x = \psi(y)$ or $y = \phi(x)$ and leads to the family of circles (1.1.6) obtained in the previous example. In particular, in a neighborhood of any point $(x, y)$ on the circle we can express $y$ as a differentiable function of $x$ or vice versa. One could obtain from (1.1.6) additional functions $y = \chi(x)$ by taking $y > 0$ on part of the circle and $y < 0$ on the remainder (as in (c) below). Such a function is not differentiable and hence not a solution.

[Fig. 1.1.1: (a) $y = \phi(x)$, (b) $x = \psi(y)$, (c) $y = \chi(x)$]


Fig. 1.1.1. Solutions defined by an integral curve are displayed in (a) and (b). Though the graph of $\chi(x)$ lies on an integral curve, it is not a solution of the differential equation.

Having introduced the notions of a solution and an integral curve, we will now review some of the most elementary methods for solving ordinary differential equations.

    1.2 Examples of Explicit Solution Techniques

(a) Separable Equations.

A differential equation is separable if it can be written in the form

$$F(x, y, y') = \frac{dy}{dx} - f(x)/g(y) = 0.$$

The differential equation is solved by separating the variables and performing the integrations

$$\int g(y)\,dy = \int f(x)\,dx.$$

Example (1.2.1) Solve

$$dy/dx = y^2/x, \quad x \neq 0.$$

We separate the variables and integrate to obtain

$$\int \frac{1}{y^2}\,dy = \int \frac{1}{x}\,dx$$

or

$$y = \frac{-1}{\ln|x| + C}. \qquad (1.2.1)$$

In the above calculation division by $y^2$ assumed $y \neq 0$. The function $y \equiv 0$ is also a solution, referred to as a singular solution. Therefore, the family given in (1.2.1) is not complete.
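For readers who want to experiment, this separable example can be handed to a computer algebra system. The sketch below (an addition to the notes, assuming SymPy) solves the equation and verifies the result:

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # restrict to x > 0 so ln|x| = ln x
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x), y(x)**2 / x)

sol = sp.dsolve(ode, y(x))
print(sol)  # equivalent to y = -1/(ln x + C)
# checkodesol substitutes the solution back into the ODE
assert sp.checkodesol(ode, sol)[0]
```

Note that `dsolve` will not report the singular solution $y \equiv 0$, which mirrors the incompleteness remark above.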

(b) Homogeneous Equations.

A differential equation is homogeneous if it can be put in the form

$$F(x, y, y') = y' - f(y/x) = 0.$$

In this case let $z = y/x$, so

$$\frac{dy}{dx} = z + x\frac{dz}{dx} = f(z)$$

or

$$\frac{dz}{dx} = \frac{f(z) - z}{x},$$

which is separable.
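The section gives no worked example, so here is a small illustrative one (not from the original notes; assumes SymPy). The equation $y' = y/x + 1$ has the form $y' = f(y/x)$ with $f(z) = z + 1$; the substitution $z = y/x$ reduces it to $x\,dz/dx = 1$, so $z = \ln x + C$ and $y = x\ln x + Cx$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# homogeneous equation y' = f(y/x) with f(z) = z + 1
ode = sp.Eq(y(x).diff(x), y(x)/x + 1)
sol = sp.dsolve(ode, y(x))
print(sol)
assert sp.checkodesol(ode, sol)[0]

# direct check of the closed form obtained by the z = y/x substitution
C = sp.symbols('C')
yc = x*sp.log(x) + C*x
assert sp.simplify(sp.diff(yc, x) - (yc/x + 1)) == 0
```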


(c) Exact Equations.

The differential equation $F(x, y, y') = 0$ is exact if it can be written in differential form

$$M(x, y)\,dx + N(x, y)\,dy = 0$$

where

$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.$$

Recall that if $M$, $N$ are $C^1$ on a simply connected domain, then there exists a function $P$ such that

$$\frac{\partial P}{\partial x} = M, \quad \frac{\partial P}{\partial y} = N.$$

Thus the exact differential equation may be written as a pure derivative

$$dP = M\,dx + N\,dy = 0$$

and hence implicit solutions may be obtained from

$$P(x, y) = C.$$

Example (1.2.2) Solve

$$(3x^2 + y^3 e^y)\,dx + (3xy^2 e^y + xy^3 e^y + 3y^2)\,dy = 0.$$

Here

$$M_y = 3y^2 e^y + y^3 e^y = N_x.$$

Thus there exists $P$ such that

$$\frac{\partial P}{\partial x} = M, \quad \frac{\partial P}{\partial y} = N.$$

It is simplest to obtain $P$ from

$$P = \int P_x\,dx = x^3 + xy^3 e^y + h(y).$$

Differentiate with respect to $y$ to obtain

$$\frac{\partial P}{\partial y} = 3xy^2 e^y + xy^3 e^y + h'(y) = N,$$

and so $h'(y) = 3y^2$, or $h(y) = y^3$. Thus

$$P(x, y) = x^3 + xy^3 e^y + y^3$$

and so

$$x^3 + xy^3 e^y + y^3 = C$$

provides an implicit solution.
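The two steps of this example, checking exactness and recovering the potential $P$, can be mirrored symbolically. A sketch (an addition to the notes, assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 3*x**2 + y**3*sp.exp(y)
N = 3*x*y**2*sp.exp(y) + x*y**3*sp.exp(y) + 3*y**2

# exactness test: M_y = N_x
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# recover the potential: P = int M dx + h(y), then fix h from P_y = N
P = sp.integrate(M, x)                          # x**3 + x*y**3*exp(y)
h = sp.integrate(sp.simplify(N - sp.diff(P, y)), y)   # h'(y) = 3y^2, so h = y^3
P = P + h
print(P)   # x**3 + x*y**3*exp(y) + y**3
```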


(d) Integrating Factors.

Given

$$M(x, y)\,dx + N(x, y)\,dy = 0,$$

$\mu = \mu(x, y)$ is called an integrating factor if

$$\frac{\partial}{\partial y}(\mu M) = \frac{\partial}{\partial x}(\mu N).$$

The point of this definition is that if $\mu$ is an integrating factor, then $\mu M\,dx + \mu N\,dy = 0$ is an exact differential equation.

Here are three suggestions for finding integrating factors:

1) Try to determine $m$, $n$ so that $\mu = x^m y^n$ is an integrating factor.

2) If

$$\frac{M_y - N_x}{N} = Q(x),$$

then

$$\mu(x) = \exp\left(\int Q(x)\,dx\right)$$

is an integrating factor.

3) If

$$\frac{N_x - M_y}{M} = R(y),$$

then

$$\mu(y) = \exp\left(\int R(y)\,dy\right)$$

is an integrating factor.
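Suggestion 2) can be illustrated on a standard textbook equation (this example is an addition, not one from the notes; assumes SymPy). For $(3xy + y^2)\,dx + (x^2 + xy)\,dy = 0$ the ratio $(M_y - N_x)/N$ collapses to a function of $x$ alone:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = 3*x*y + y**2
N = x**2 + x*y

# (M_y - N_x)/N = (x + y)/(x*(x + y)) = 1/x, a function of x alone
Q = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
print(Q)   # 1/x

mu = sp.exp(sp.integrate(Q, x))   # exp(ln x) = x
# with the factor mu, the equation becomes exact
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)) == 0
```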

(e) First Order Linear Equations.

The general form of a first order linear equation is

$$a_0(x)\, y' + a_1(x)\, y = b(x).$$

We assume $p = \dfrac{a_1}{a_0}$ and $f = \dfrac{b}{a_0}$ are continuous on some interval and rewrite the equation as

$$y'(x) + p(x)\, y(x) = f(x). \qquad (1.2.2)$$

By inspection we see that

$$\mu = e^{\int p(x)\,dx}$$

is an integrating factor for the homogeneous equation. From (1.2.2) we obtain

$$\frac{d}{dx}\left(e^{\int p(x)\,dx}\, y\right) = f(x)\, e^{\int p(x)\,dx}$$


and so

$$y(x) = e^{-\int p(x)\,dx}\left(\int f(x)\, e^{\int p(x)\,dx}\,dx + c\right).$$

Example (1.2.3) Find all solutions of

$$x y' - y = x^2.$$

If $x \neq 0$, rewrite the equation as

$$y' - \frac{1}{x}\, y = x \qquad (1.2.3)$$

and so

$$\mu = e^{\int -(1/x)\,dx} = \frac{1}{|x|}.$$

If $x > 0$,

$$\frac{d}{dx}\left(\frac{y}{x}\right) = 1$$

and so $y = x^2 + Cx$. If $x < 0$,

$$\frac{d}{dx}\left(-\frac{1}{x}\, y\right) = -1$$

and again $y = x^2 + Cx$. Hence $y = x^2 + Cx$ gives a differentiable solution valid for all $x$.

Now consider the initial value problem

$$x y' - y = x^2, \quad y(0) = y_0.$$

If $y_0 = 0$, we see that there are infinitely many solutions of this problem, whereas if $y_0 \neq 0$, there are no solutions. The problem in this latter case arises from the fact that when the equation is put in the form of (1.2.3) the coefficient $p(x)$ is not continuous at $x = 0$. The significance of this observation will be elaborated upon in the next chapter.
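The one-parameter family found in this example can be confirmed by machine. A sketch (an addition to the notes, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(x*y(x).diff(x) - y(x), x**2)

sol = sp.dsolve(ode, y(x))
print(sol)   # y = x**2 + C1*x, possibly printed as x*(C1 + x)
assert sp.checkodesol(ode, sol)[0]
```

Every member of the family passes through the origin, which is why $y(0) = y_0 \neq 0$ admits no solution while $y(0) = 0$ admits infinitely many.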

(e) Reduction of Order.

If the independent variable is missing from the differential equation, that is, the equation is of the form

$$F(y, y', \ldots, y^{(n)}) = 0, \qquad (1.2.4)$$

set

$$v = y' = \frac{dy}{dx},$$

$$y'' = \frac{d}{dx}(y') = \frac{dv}{dx} = \frac{dv}{dy}\frac{dy}{dx} = \frac{dv}{dy}\, v,$$

$$y''' = \frac{d}{dx}\left(\frac{dv}{dy}\, v\right) = \frac{dv}{dy}\frac{dv}{dx} + v\,\frac{d}{dy}\left(\frac{dv}{dy}\right)\frac{dy}{dx} = v\left(\frac{dv}{dy}\right)^2 + v^2\,\frac{d^2v}{dy^2},$$

etc. These calculations show that with this change of variables we obtain a differential equation of one less order with $y$ regarded as the independent variable, i.e., the differential equation (1.2.4) becomes

$$G(y, v, v', \ldots, v^{(n-1)}) = 0.$$

If the dependent variable is missing, i.e.,

$$F(x, y', y'', \ldots, y^{(n)}) = 0,$$

again set $v = \dfrac{dy}{dx} = y'$ to obtain

$$G(x, v, v', \ldots, v^{(n-1)}) = 0.$$
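A small illustration of the second case (an addition to the notes; assumes SymPy): in $x y'' = y'$ the dependent variable is missing, so $v = y'$ satisfies the first order equation $x v' = v$, giving $v = Cx$ and, after one more integration, $y = Cx^2/2 + D$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# dependent variable missing: reduces to x v' = v with v = y'
ode = sp.Eq(x*y(x).diff(x, 2), y(x).diff(x))
sol = sp.dsolve(ode, y(x))
print(sol)
assert sp.checkodesol(ode, sol)[0]
```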

(f) Linear Constant Coefficient Equations.

A homogeneous linear differential equation with constant real coefficients of order $n$ has the form

$$y^{(n)} + a_1 y^{(n-1)} + \cdots + a_n y = 0.$$

We introduce the notation $D = \dfrac{d}{dx}$ and write the above equation as

$$P(D)y \equiv \left(D^n + a_1 D^{n-1} + \cdots + a_n\right) y = 0.$$

By the fundamental theorem of algebra we can write

$$P(D) = (D - r_1)^{m_1} \cdots (D - r_k)^{m_k}\,(D^2 - 2\alpha_1 D + \alpha_1^2 + \beta_1^2)^{p_1} \cdots (D^2 - 2\alpha_\ell D + \alpha_\ell^2 + \beta_\ell^2)^{p_\ell},$$

where

$$\sum_{j=1}^{k} m_j + 2 \sum_{j=1}^{\ell} p_j = n.$$

Lemma 1.1. The general solution of $(D - r)^k y = 0$ is

$$y = \left(c_1 + c_2 x + \cdots + c_k x^{k-1}\right) e^{rx}$$

and the general solution of $(D^2 - 2\alpha D + \alpha^2 + \beta^2)^k y = 0$ is

$$y = \left(c_1 + c_2 x + \cdots + c_k x^{k-1}\right) e^{\alpha x} \cos(\beta x) + \left(d_1 + d_2 x + \cdots + d_k x^{k-1}\right) e^{\alpha x} \sin(\beta x).$$

Proof. Note first that $(D - r)e^{rx} = D(e^{rx}) - re^{rx} = re^{rx} - re^{rx} = 0$, and for $k > j$

$$(D - r)\left(x^j e^{rx}\right) = D\left(x^j e^{rx}\right) - r x^j e^{rx} = j x^{j-1} e^{rx}.$$


Thus we have

$$(D - r)^k\left(x^j e^{rx}\right) = (D - r)^{k-1}(D - r)\left(x^j e^{rx}\right) = j\,(D - r)^{k-1}\left(x^{j-1} e^{rx}\right) = \cdots = j!\,(D - r)^{k-j}\left(e^{rx}\right) = 0.$$

Therefore each function $x^j e^{rx}$, for $j = 0, 1, \ldots, (k-1)$, is a solution of the equation, and by the fundamental theorem of algebra (a nonzero polynomial of degree $k-1$ has at most $k-1$ roots) these functions are linearly independent, i.e.,

$$0 = \sum_{j=1}^{k} c_j x^{j-1} e^{rx} = e^{rx} \sum_{j=1}^{k} c_j x^{j-1} \quad \text{for all } x$$

implies $c_1 = c_2 = \cdots = c_k = 0$.

Note that each factor $(D^2 - 2\alpha D + \alpha^2 + \beta^2)$ corresponds to a pair of complex conjugate roots $r = \alpha \pm i\beta$. In the above calculations we did not assume that $r$ is real, so that for a pair of complex roots we must have solutions

$$e^{(\alpha \pm i\beta)x} = e^{\pm i\beta x}\, e^{\alpha x} = e^{\alpha x}\left(\cos(\beta x) \pm i \sin(\beta x)\right),$$

and any linear combination of these functions will also be a solution. In particular the real and imaginary parts must be solutions, since

$$\tfrac{1}{2}\left[e^{\alpha x}(\cos(\beta x) + i\sin(\beta x))\right] + \tfrac{1}{2}\left[e^{\alpha x}(\cos(\beta x) - i\sin(\beta x))\right] = e^{\alpha x}\cos(\beta x),$$

$$\tfrac{1}{2i}\left[e^{\alpha x}(\cos(\beta x) + i\sin(\beta x))\right] - \tfrac{1}{2i}\left[e^{\alpha x}(\cos(\beta x) - i\sin(\beta x))\right] = e^{\alpha x}\sin(\beta x).$$

Combining the above results we find that the functions

$$y = \left(c_1 + c_2 x + \cdots + c_k x^{k-1}\right) e^{\alpha x} \cos(\beta x)$$

and

$$y = \left(d_1 + d_2 x + \cdots + d_k x^{k-1}\right) e^{\alpha x} \sin(\beta x)$$

are solutions and, as is shown in Section 1.6, these solutions are linearly independent.

The general solution of $P(D)y = 0$ is given as a linear combination of the solutions for each real root and each pair of complex roots.

Let us consider an example which is already written in factored form:

$$(D + 1)^3 (D^2 + 4D + 13)\, y = 0.$$

The term $(D + 1)^3$ gives a part of the solution as

$$(c_1 + c_2 x + c_3 x^2)\, e^{-x}$$


and the term $(D^2 + 4D + 13)$ corresponds to complex roots with $\alpha = -2$ and $\beta = 3$, giving the part of the solution

$$c_4 e^{-2x} \cos(3x) + c_5 e^{-2x} \sin(3x).$$

The general solution is

$$y = (c_1 + c_2 x + c_3 x^2)\, e^{-x} + c_4 e^{-2x} \cos(3x) + c_5 e^{-2x} \sin(3x).$$
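This example can be reproduced mechanically: the characteristic polynomial carries the roots, and a computer algebra system will return exactly the five-parameter family above. A sketch (an addition to the notes, assuming SymPy):

```python
import sympy as sp

t, r = sp.symbols('t r')
y = sp.Function('y')

# characteristic polynomial of (D+1)^3 (D^2 + 4D + 13)
charpoly = sp.expand((r + 1)**3 * (r**2 + 4*r + 13))
print(sp.roots(charpoly, r))   # root -1 with multiplicity 3, and -2 +/- 3i

# build the 5th order ODE from the coefficients and solve it
coeffs = sp.Poly(charpoly, r).all_coeffs()   # highest power first
ode = sp.Eq(sum(c*y(t).diff(t, len(coeffs) - 1 - k)
                for k, c in enumerate(coeffs)), 0)
sol = sp.dsolve(ode, y(t))
print(sol)
assert sp.checkodesol(ode, sol)[0]
```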

(g) Equations of Euler (and Cauchy) Type.

A differential equation is of Euler type if it can be written as

$$F(x, y, y', \ldots, y^{(n)}) = G(y,\, xy',\, \ldots,\, x^n y^{(n)}) = 0.$$

Set $x = e^t$, so that

$$\dot{y} = \frac{dy}{dt} = \frac{dy}{dx}\frac{dx}{dt} = y'x,$$

$$\frac{d^2y}{dt^2} - \frac{dy}{dt} = \frac{d}{dt}(y'x) - y'x = \frac{dy'}{dx}\frac{dx}{dt}\,x + y'x - y'x = y''x^2,$$

etc. In this way the differential equation can be put into the form

$$G(y, \dot{y}, \ldots, y^{(n)}) = 0,$$

in which the new independent variable $t$ does not appear explicitly, and now the method of reduction of order may be applied.

An important special case of this type of equation is the so-called Cauchy equation,

$$x^n y^{(n)} + a_1 x^{n-1} y^{(n-1)} + \cdots + a_{n-1} x y' + a_n y = 0,$$

or

$$P(D)y \equiv \left(x^n D^n + a_1 x^{n-1} D^{n-1} + \cdots + a_n\right) y = 0.$$

With $x = e^t$ and $\mathcal{D} \equiv \dfrac{d}{dt}$ we have

$$Dy = \frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx} = \frac{1}{x}\frac{dy}{dt} \quad\Longrightarrow\quad xDy = \mathcal{D}y,$$

$$D^2 y = \frac{d}{dx}\left(\frac{1}{x}\frac{dy}{dt}\right) = \frac{1}{x^2}\left(\frac{d^2y}{dt^2} - \frac{dy}{dt}\right) \quad\Longrightarrow\quad x^2 D^2 y = \mathcal{D}(\mathcal{D} - 1)y,$$

$$\vdots$$

$$x^r D^r y = \mathcal{D}(\mathcal{D} - 1)(\mathcal{D} - 2)\cdots(\mathcal{D} - r + 1)y.$$

Thus we can write

$$P(D)y = \tilde{P}(\mathcal{D})y = \left[\mathcal{D}(\mathcal{D} - 1)\cdots(\mathcal{D} - n + 1) + a_1 \mathcal{D}(\mathcal{D} - 1)\cdots(\mathcal{D} - n + 2) + \cdots + a_{n-1}\mathcal{D} + a_n\right] y = 0.$$


The second order case

$$a x^2 y'' + b x y' + c y = 0 \qquad (1.2.5)$$

will arise numerous times throughout the course. We assume the coefficients are constant and $x > 0$, and as an alternative to the above approach we seek a solution in the form $y = x^r$. In this way one obtains the following quadratic for the exponent $r$:

$$a r^2 + (b - a) r + c = 0. \qquad (1.2.6)$$

There are three cases to be considered:

1) If (1.2.6) has distinct real roots $r_1$, $r_2$, then we obtain solutions

$$x^{r_1}, \quad x^{r_2}.$$

2) If $r_1 = \alpha + i\beta$, $r_2 = \alpha - i\beta$, then solutions may be written as

$$x^{\alpha + i\beta} = x^\alpha e^{i\beta \ln x} = x^\alpha\left(\cos(\beta \ln x) + i \sin(\beta \ln x)\right)$$

and similarly

$$x^{\alpha - i\beta} = x^\alpha\left(\cos(\beta \ln x) - i \sin(\beta \ln x)\right).$$

Observe that a linear combination of solutions of (1.2.5) is again a solution, and so we obtain the solutions

$$\tfrac{1}{2}\left(x^{\alpha + i\beta} + x^{\alpha - i\beta}\right) = x^\alpha \cos(\beta \ln x)$$

and

$$\tfrac{1}{2i}\left(x^{\alpha + i\beta} - x^{\alpha - i\beta}\right) = x^\alpha \sin(\beta \ln x).$$

3) If (1.2.6) has repeated roots, then $(b - a)^2 = 4ac$ and

$$r_1 = (a - b)/2a.$$

We seek a second solution as

$$y = v(x)\, x^{r_1}$$

and observe that $v$ must satisfy

$$a x v'' + a v' = 0.$$

Set $w = v'$ to get

$$x w' + w = 0,$$

and so

$$w = c_1/x$$

and

$$v = c_1 \ln x + c_2.$$

Thus in the case of repeated roots we obtain the solutions

$$x^{r_1} \ln x, \quad x^{r_1}.$$

One might try to verify that $c_1 x^{r_1} + c_2 x^{r_2}$, $x^\alpha\left(c_1 \cos(\beta \ln x) + c_2 \sin(\beta \ln x)\right)$, and $c_1 x^{r_1} \ln x + c_2 x^{r_1}$ are complete families of solutions of Euler's equation in each of the three cases, respectively. That this is indeed the case will be a trivial consequence of a more general result for linear equations that we will prove in Chapter 3.
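Both the derivation of (1.2.6) and case 2) are easy to spot-check symbolically. The sketch below (an addition to the notes, assuming SymPy) substitutes $y = x^r$ into (1.2.5), and then verifies the complex-root case on $x^2 y'' + x y' + y = 0$, where $a = b = c = 1$ gives $r = \pm i$, i.e. $\alpha = 0$, $\beta = 1$:

```python
import sympy as sp

x, r, a, b, c = sp.symbols('x r a b c', positive=True)

# substitute y = x^r into a x^2 y'' + b x y' + c y and divide by x^r
expr = a*x**2*sp.diff(x**r, x, 2) + b*x*sp.diff(x**r, x) + c*x**r
quad = sp.expand(sp.simplify(expr / x**r))
print(quad)   # a*r**2 + (b - a)*r + c, i.e. equation (1.2.6)

# complex-root case: x^2 y'' + x y' + y = 0 has solutions cos(ln x), sin(ln x)
for yc in (sp.cos(sp.log(x)), sp.sin(sp.log(x))):
    assert sp.simplify(x**2*sp.diff(yc, x, 2) + x*sp.diff(yc, x) + yc) == 0
```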


(h) Equations of Legendre Type.

A slightly more general class of equations than the Cauchy equations is the Legendre equations, which have the form

$$P(D)y \equiv \left[(ax + b)^n D^n + a_{n-1}(ax + b)^{n-1} D^{n-1} + \cdots + a_1 (ax + b) D + a_0\right] y = 0.$$

These equations, just as Cauchy equations, can be reduced to linear constant coefficient equations. For these equations we use the substitution $(ax + b) = e^t$, which, using the chain rule just as above, gives

$$Dy = \frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx} = \frac{a}{(ax + b)}\frac{dy}{dt} \quad\Longrightarrow\quad (ax + b)\, Dy = a\,\mathcal{D}y,$$

$$(ax + b)^2 D^2 y = a^2\, \mathcal{D}(\mathcal{D} - 1)y,$$

$$\vdots$$

$$(ax + b)^r D^r y = a^r\, \mathcal{D}(\mathcal{D} - 1)(\mathcal{D} - 2)\cdots(\mathcal{D} - r + 1)y.$$

Thus we can write

$$P(D)y = \tilde{P}(\mathcal{D})y = \left[a^n \mathcal{D}(\mathcal{D} - 1)\cdots(\mathcal{D} - n + 1) + a_{n-1} a^{n-1} \mathcal{D}(\mathcal{D} - 1)\cdots(\mathcal{D} - n + 2) + \cdots + a_1 a\, \mathcal{D} + a_0\right] y = 0.$$

As an example consider the equation

$$\left[(x + 2)^2 D^2 - (x + 2) D + 1\right] y = 0.$$

Set $(x + 2) = e^t$; then the equation can be written as

$$\left[\mathcal{D}(\mathcal{D} - 1) - \mathcal{D} + 1\right] y = 0$$

or

$$\left[\mathcal{D}^2 - 2\mathcal{D} + 1\right] y = (\mathcal{D} - 1)^2 y = 0.$$

The general solution of this problem is

$$y(t) = C_1 e^t + C_2 t e^t,$$

and we can readily change back to the $x$ variable using $t = \ln(x + 2)$ to obtain

$$y = (x + 2)\left(C_1 + C_2 \ln(x + 2)\right).$$
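The final answer can be verified directly by substituting it back into the original equation. A sketch (an addition to the notes, assuming SymPy; the restriction $x > 0$ keeps the logarithm real, though $x > -2$ would suffice):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2', positive=True)
y = (x + 2)*(C1 + C2*sp.log(x + 2))

# residual of (x+2)^2 y'' - (x+2) y' + y
residual = (x + 2)**2*sp.diff(y, x, 2) - (x + 2)*sp.diff(y, x) + y
print(sp.simplify(residual))   # 0
```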

(i) Equations of Bernoulli Type.

A Bernoulli equation is a differential equation of the form

$$y' + p(x)\, y = q(x)\, y^n.$$


It can be shown that the substitution $v = y^{1-n}$ changes the Bernoulli equation into the linear differential equation

$$v'(x) + (1 - n)\, p(x)\, v = (1 - n)\, q(x).$$

The special cases $n = 0$ and $n = 1$ should be considered separately.

As an example consider the differential equation

$$y' + y = y^{a+1}$$

where $a$ is a nonzero constant. This equation is separable, so we can separate variables to obtain an implicit solution, or we can use Bernoulli's procedure to derive the explicit solution

$$y = (1 + c e^{ax})^{-1/a}.$$

The student should check that both approaches give this solution.
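A symbolic check of the explicit solution (an addition to the notes, assuming SymPy; the positivity assumptions on $a$ and $c$ keep the fractional powers well defined):

```python
import sympy as sp

x = sp.symbols('x')
a, c = sp.symbols('a c', positive=True)
y = (1 + c*sp.exp(a*x))**(-1/a)

# residual of y' + y - y^(a+1); it should simplify to zero
residual = sp.diff(y, x) + y - y**(a + 1)
print(sp.simplify(residual))
```

The cancellation is visible by hand as well: both $y'$ and $y^{a+1}$ carry the factor $(1 + ce^{ax})^{-(a+1)/a}$, and the remaining bracket collapses to zero.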

(j) Equations of Clairaut Type.

An equation of the form

$$y = x y' + f(y')$$

for any function $f$ is called a Clairaut equation. While at first glance it would appear that such an equation could be rather complicated, it turns out that this is not the case. As can be readily verified, the general solution is given by

$$y = Cx + f(C)$$

where $C$ is a constant.

As an example, the equation

$$y = x y' + \sqrt{4 + (y')^2}$$

has solution $y = Cx + \sqrt{4 + C^2}$.
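The "readily verified" step is just the observation that $y' = C$ for the candidate line, so substituting back reproduces the equation. A one-line check (an addition to the notes, assuming SymPy):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = C*x + sp.sqrt(4 + C**2)

# y' = C, so x*y' + sqrt(4 + y'^2) must reproduce y exactly
p = sp.diff(y, x)
assert sp.simplify(x*p + sp.sqrt(4 + p**2) - y) == 0
print("Clairaut family verified")
```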

Sometimes one can transform an equation into Clairaut form. For example, consider the equation

$$y = 3x y' + 6y^2 (y')^2.$$

If we multiply the equation by $y^2$ we get

$$y^3 = 3x y^2 y' + 6y^4 (y')^2.$$

Now use the transformation $v = y^3$, which implies $v' = 3y^2 y'$, to write the equation as

$$v = x v' + \tfrac{2}{3}(v')^2,$$

whose solution is $v = Cx + \tfrac{2}{3}C^2$, which gives $y^3 = Cx + \tfrac{2}{3}C^2$.


(k) Other First Order and Higher Degree Equations.

A first order differential equation may have higher degree. Often an equation is given in polynomial form in the variable $y'$, and in this case we refer to the equation as having degree $n$ if $n$ is the highest power of $y'$ that occurs in the equation. If we write $p = y'$ for notational convenience, then such an equation can be written as

$$p^n + g_{n-1}(x, y)\, p^{n-1} + \cdots + g_1(x, y)\, p + g_0(x, y) = 0. \qquad (1.2.7)$$

It may be possible to solve such equations using one of the following methods (see [?]).

Equation Solvable for p:

It may happen that (1.2.7) can be factored into

$$(p - F_1)(p - F_2) \cdots (p - F_n) = 0,$$

in which case we can solve the resulting first order and first degree equations

$$y' = F_1(x, y), \quad y' = F_2(x, y), \quad \ldots, \quad y' = F_n(x, y).$$

This will lead to solutions

$$f_1(x, y, c) = 0, \quad f_2(x, y, c) = 0, \quad \ldots, \quad f_n(x, y, c) = 0,$$

and the general solution is the product of the solutions, since the factored equation can be rewritten in any order (i.e., the ordering of the factors does not matter). Thus we have

$$f_1(x, y, c)\, f_2(x, y, c) \cdots f_n(x, y, c) = 0.$$

For example,

$$p^4 - (x + 2y + 1)p^3 + (x + 2y + 2xy)p^2 - 2xyp = 0$$

can be factored into

$$p(p - 1)(p - x)(p - 2y) = 0,$$

resulting in the equations

$$y' = 0, \quad y' = 1, \quad y' = x, \quad y' = 2y.$$

These equations yield the solutions

$$y - c = 0, \quad y - x - c = 0, \quad 2y - x^2 - c = 0, \quad y - ce^{2x} = 0,$$

giving the solution

$$(y - c)(y - x - c)(2y - x^2 - c)(y - ce^{2x}) = 0.$$
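The factorization step, which is the only nontrivial part of this example, is easy to confirm by expansion (a check added to the notes, assuming SymPy):

```python
import sympy as sp

x, y, p, c = sp.symbols('x y p c')

quartic = p**4 - (x + 2*y + 1)*p**3 + (x + 2*y + 2*x*y)*p**2 - 2*x*y*p
factored = p*(p - 1)*(p - x)*(p - 2*y)
assert sp.expand(quartic - factored) == 0

# spot check one of the component solutions: y = c*e^{2x} solves y' = 2y
yc = c*sp.exp(2*x)
assert sp.diff(yc, x) - 2*yc == 0
print("factorization and component solution verified")
```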


Equation Solvable for y:

In this case we can write the equation $G(x, y, y') = 0$ in the form $y = f(x, p)$. Differentiate this equation with respect to $x$ to obtain

$$p = \frac{dy}{dx} = f_x + f_p \frac{dp}{dx} = F\left(x, p, \frac{dp}{dx}\right).$$

This is an equation for $p$ which is first order and first degree. Solving this equation to obtain $\varphi(x, p, c) = 0$, we then use the original equation $y = f(x, p)$ to try to eliminate the $p$ dependence and obtain $\Phi(x, y, c) = 0$.

Consider the example $y = 2px + p^4 x^2$. We differentiate with respect to $x$ to obtain

$$p = 2x\frac{dp}{dx} + 2p + 2p^4 x + 4p^3 x^2 \frac{dp}{dx},$$

which can be rewritten as

$$\left(p + 2x\frac{dp}{dx}\right)\left(1 + 2p^3 x\right) = 0.$$

An analysis of singular solutions shows that we can discard the factor $(1 + 2p^3 x)$, and we have $p + 2x\dfrac{dp}{dx} = 0$, which implies $x p^2 = c$. If we write the original equation as $(y - p^4 x^2) = 2px$ and square both sides, we have $(y - p^4 x^2)^2 = 4p^2 x^2$. With this and $x p^2 = c$ we can eliminate $p$ to obtain $(y - c^2)^2 = 4cx$.

Equation Solvable for x:

In this case an equation $G(x, y, y') = 0$ can be written as $x = f(y, p)$. We proceed by differentiating with respect to $y$ to obtain

$$\frac{1}{p} = \frac{dx}{dy} = f_y + f_p \frac{dp}{dy} = F\left(y, p, \frac{dp}{dy}\right),$$

which is first order and first degree in $\dfrac{dp}{dy}$. Solving this equation to obtain $\varphi(y, p, c) = 0$, we then use the original equation $x = f(y, p)$ to try to eliminate the $p$ dependence and obtain $\Phi(x, y, c) = 0$.

As an example consider $p^3 - 2xyp + 4y^2 = 0$, which we can write as

$$2x = \frac{p^2}{y} + \frac{4y}{p}.$$

Differentiating with respect to $y$ gives

$$\frac{2}{p} = \frac{2p}{y}\frac{dp}{dy} - \frac{p^2}{y^2} + 4\left(\frac{1}{p} - \frac{y}{p^2}\frac{dp}{dy}\right)$$

or

$$\left(p - 2y\frac{dp}{dy}\right)\left(2y^2 - p^3\right) = 0.$$

The term $(2y^2 - p^3)$ gives rise to singular solutions, and we consider only

$$p - 2y\frac{dp}{dy} = 0,$$


which has solution $p^2 = cy$. We now use this relation and the original equation to eliminate $p$. First we have

$$2x - c = \frac{4y}{p},$$

which implies

$$(2x - c)^2 = \frac{16y^2}{p^2} = \frac{16y^2}{cy} = \frac{16y}{c},$$

and finally $16y = c(2x - c)^2$.
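The eliminated answer can be confirmed by substituting it back into the original degree-three equation (a check added to the notes, assuming SymPy):

```python
import sympy as sp

x, c = sp.symbols('x c')

y = c*(2*x - c)**2 / 16   # candidate obtained by eliminating p
p = sp.diff(y, x)         # equals c*(2*x - c)/4
residual = sp.expand(p**3 - 2*x*y*p + 4*y**2)
print(residual)   # 0
```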

1.3 Linear Nonhomogeneous Problems

Variation of Parameters

Consider a nonhomogeneous second order linear equation

$$L(y) = y'' + a_1 y' + a_2 y = f$$

and suppose $\{y_1, y_2\}$ is a fundamental set. Then $c_1 y_1(t) + c_2 y_2(t)$ is a general solution of $L(y) = 0$. A method due to Lagrange, called the method of variation of parameters, for solving $L(y) = f$ is based on the idea of seeking a solution as

$$y_p(t) = c_1(t)\, y_1(t) + c_2(t)\, y_2(t).$$

Then

$$y_p' = c_1 y_1' + c_2 y_2' + c_1' y_1 + c_2' y_2.$$

To simplify the algebra, we impose the auxiliary condition

$$c_1' y_1 + c_2' y_2 = 0.$$

Then

$$y_p'' = c_1 y_1'' + c_2 y_2'' + c_1' y_1' + c_2' y_2'.$$

If we substitute into $L(y) = f$, we want

$$c_1(t)\left(y_1'' + a_1 y_1' + a_2 y_1\right) + c_2(t)\left(y_2'' + a_1 y_2' + a_2 y_2\right) + c_1' y_1' + c_2' y_2' = f(t).$$

Note that the two parenthesized expressions are zero because $y_1$ and $y_2$ are solutions of the homogeneous equation. Thus we need to solve

$$c_1' y_1 + c_2' y_2 = 0$$
$$c_1' y_1' + c_2' y_2' = f.$$

By Cramer's rule,

$$c_1'(t) = \frac{-y_2(t)\, f(t)}{W(y_1, y_2)(t)}, \quad c_2'(t) = \frac{y_1(t)\, f(t)}{W(y_1, y_2)(t)}.$$


Thus a particular solution is given as

$$y_p(t) = -y_1(t) \int_{t_0}^{t} \frac{y_2(s)\, f(s)}{W(s)}\,ds + y_2(t) \int_{t_0}^{t} \frac{y_1(s)\, f(s)}{W(s)}\,ds$$

$$= \int_{t_0}^{t} \frac{y_1(s)\, y_2(t) - y_1(t)\, y_2(s)}{W(y_1, y_2)(s)}\, f(s)\,ds = \int_{t_0}^{t} g(t, s)\, f(s)\,ds.$$

$g(t, s)$ is called a fundamental solution.

The same method works, if not as smoothly, in the general case. Consider the $n$th order case

$$L(y) = y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y^{(1)} + a_n y = f$$

and let $\{y_1, \ldots, y_n\}$ be a fundamental set of solutions of the homogeneous problem $L(y) = 0$. Given this basis of solutions $\{y_j\}_{j=1}^{n}$, we seek a solution of $L(y) = f$ in the form

$$y_p(t) = u_1(t)\, y_1(t) + \cdots + u_n(t)\, y_n(t).$$

We seek a system of equations that can be solved to find $u_1, \ldots, u_n$. To this end we note that

by applying the product rule to $y_p$ and collecting terms carefully, we can conclude that

$$u_1' y_1 + u_2' y_2 + \cdots + u_n' y_n = 0 \quad\Longrightarrow\quad y_p' = u_1 y_1' + u_2 y_2' + \cdots + u_n y_n'$$

$$u_1' y_1' + u_2' y_2' + \cdots + u_n' y_n' = 0 \quad\Longrightarrow\quad y_p'' = u_1 y_1'' + u_2 y_2'' + \cdots + u_n y_n''$$

$$\vdots$$

$$u_1' y_1^{(n-2)} + u_2' y_2^{(n-2)} + \cdots + u_n' y_n^{(n-2)} = 0 \quad\Longrightarrow\quad y_p^{(n-1)} = u_1 y_1^{(n-1)} + u_2 y_2^{(n-1)} + \cdots + u_n y_n^{(n-1)}$$

$$u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)} = f \quad\Longrightarrow\quad y_p^{(n)} = u_1 y_1^{(n)} + u_2 y_2^{(n)} + \cdots + u_n y_n^{(n)} + f,$$

which implies

$$L(y_p) = u_1 L(y_1) + u_2 L(y_2) + \cdots + u_n L(y_n) + f = f.$$

Now we note that the system of equations becomes

$$\begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ f \end{pmatrix}.$$


The determinant of the coefficient matrix is nonvanishing, since it is the Wronskian $W(t)$ of a set of linearly independent solutions to an $n$th order linear differential equation (see equation (1.5.4) in Section 1.5). Applying Cramer's rule we can write the solutions as

$$u_k'(t) = \frac{W_k(t)\, f(t)}{W(t)}, \quad k = 1, \ldots, n,$$

where $W_k(t)$ is the determinant of the matrix obtained from the coefficient matrix by replacing the $k$th column, i.e., $\left(y_k \;\; y_k' \;\; \cdots \;\; y_k^{(n-1)}\right)^T$, by the vector $\left(0 \;\; 0 \;\; \cdots \;\; 1\right)^T$. If we define

$$g(t, s) = \sum_{k=1}^{n} \frac{y_k(t)\, W_k(s)}{W(s)},$$

then a particular solution of $L(y) = f$ is

$$y_p = \int_{t_0}^{t} g(t, s)\, f(s)\,ds.$$

Method of Undetermined Coefficients

As we have already learned, the method of variation of parameters provides a way of representing a particular solution to a nonhomogeneous linear problem

$$Ly = y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y^{(1)} + a_n y = f$$

in terms of a basis of solutions $\{y_j\}_{j=1}^{n}$ of the linear homogeneous problem. In the special case in which the operator $L$ has constant coefficients, we have just seen that it is possible to construct such a basis of solutions for the homogeneous problem. Thus given any $f$ we can write out a formula for a particular solution in integral form

$$y_p(t) = \int_{t_0}^{t} g(t, s)\, f(s)\,ds.$$

Unfortunately, the method of variation of parameters often requires much more work than is needed. As an example consider the problem

$$Ly = y''' + y'' + y' + y = 1, \quad y(0) = 0, \quad y'(0) = 1, \quad y''(0) = 0.$$

Example 1.1. For the homogeneous problem we have

$$(D^3 + D^2 + D + 1)y = (D + 1)(D^2 + 1)y = 0,$$

so we can take

$$y_1 = \cos t, \quad y_2 = \sin t, \quad y_3 = e^{-t}.$$


Thus the Wronskian is

$$W(t) = \det\begin{pmatrix} \cos t & \sin t & e^{-t} \\ -\sin t & \cos t & -e^{-t} \\ -\cos t & -\sin t & e^{-t} \end{pmatrix}$$

and we can apply Abel's theorem to obtain

$$W(t) = W(0)\, e^{-\int_0^t 1\,ds} = \det\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ -1 & 0 & 1 \end{pmatrix} e^{-t} = 2e^{-t}.$$

Thus by the variation of parameters formula $y_p = u_1 y_1 + u_2 y_2 + u_3 y_3$, where

$$u_1' = \frac{1}{2e^{-t}} \det\begin{pmatrix} 0 & \sin t & e^{-t} \\ 0 & \cos t & -e^{-t} \\ 1 & -\sin t & e^{-t} \end{pmatrix} = -\frac{1}{2}(\cos t + \sin t),$$

which implies

$$u_1(t) = \frac{1}{2}(\cos t - \sin t).$$

Similarly, we obtain

$$u_2' = \frac{1}{2}(\cos t - \sin t), \quad u_3'(t) = \frac{1}{2}\, e^{t},$$

which imply

$$u_2 = \frac{1}{2}(\cos t + \sin t), \quad u_3(t) = \frac{1}{2}\, e^{t}.$$

So we get

$$y_p = u_1 y_1 + u_2 y_2 + u_3 y_3 = \frac{1}{2}(\cos t - \sin t)\cos t + \frac{1}{2}(\sin t + \cos t)\sin t + \frac{1}{2}\, e^{t} e^{-t} = 1.$$

Wow! In retrospect it would have been easy to see that $y_p = 1$ is a particular solution. Now the general solution is

$$y = 1 + c_1 \cos t + c_2 \sin t + c_3 e^{-t}$$

and we can apply the initial conditions to determine the constants, which yields

$$y = 1 + \frac{1}{2}\left(\sin t - \cos t - e^{-t}\right).$$
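The entire computation, Wronskian and all, can be cross-checked against a computer algebra system. The sketch below (an addition to the notes, assuming SymPy) solves the initial value problem directly and compares with the result just obtained:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 3) + y(t).diff(t, 2) + y(t).diff(t) + y(t), 1)

ics = {y(0): 0,
       y(t).diff(t).subs(t, 0): 1,
       y(t).diff(t, 2).subs(t, 0): 0}
sol = sp.dsolve(ode, y(t), ics=ics)

expected = 1 + (sp.sin(t) - sp.cos(t) - sp.exp(-t))/2
assert sp.simplify(sol.rhs - expected) == 0
print("IVP solution matches the hand computation")
```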


We note that if we were to apply the method for finding a particular solution with the properties

$$y_p(0) = 0, \quad y_p'(0) = 0, \quad y_p''(0) = 0$$

(as given in the proof of the $n$th order case), then we would get

$$y_p(t) = 1 - \frac{1}{2}\left(\cos t + \sin t + e^{-t}\right).$$

We note that the second term is part of the homogeneous solution, so it can be excluded.

In any case this is a lot of work to find such a simple particular solution. In case the function f is a linear combination of:

1. polynomials,

2. polynomials times exponentials, or

3. polynomials times exponentials times sine or cosine,

i.e., if f is a solution of a linear constant coefficient homogeneous differential equation, one can apply the method of undetermined coefficients.

The method goes as follows:

1. Let L = P(D) = D^n + Σ_{j=1}^n a_j D^{n-j}.

2. Let Ly = 0 have general solution y_h = Σ_{j=1}^n c_j y_j.

3. Assume that M = Q(D) = D^m + Σ_{j=1}^m b_j D^{m-j} is a constant coefficient linear differential operator such that Mf = 0.

4. Then ML is a polynomial constant coefficient differential operator.

5. Notice that if y_p is any solution of Ly = f and y_h is the general solution of the homogeneous problem, then we have

    ML(y_h + y_p) = Mf = 0.    (1.3.1)

6. On the other hand we can write the general solution of this problem by simply factoring ML and applying the results of the previous section.

7. Note that the solution y_h = Σ_{j=1}^n c_j y_j is part of this general solution.


8. So let us denote the general solution by

    y = Σ_{j=1}^n c_j y_j + Σ_{j=1}^m d_j w_j.    (1.3.2)

9. Now we also know, by the variation of parameters formula, that there exists a particular solution y_p of Ly = f.

10. But since we know the general solution of (1.3.1) is (1.3.2), this means that any particular solution must also be part of the full general solution of the large homogeneous problem, i.e., y_p = Σ_{j=1}^n c_j y_j + Σ_{j=1}^m d_j w_j.

11. We know that Ly_h = 0, so we can choose y_p = Σ_{j=1}^m d_j w_j.

12. We now substitute this expression into the original equation and solve for the constants {d_j}.

Example 1.2. Ly = (D^2 - 2D + 2)y = t^2 e^t sin(t):

The general solution of the homogeneous equation Ly = 0 is

    y_h = c_1 e^t cos(t) + c_2 e^t sin(t).

According to the above discussion we seek a differential operator M so that

    M(t^2 e^t sin(t)) = 0.

We immediately choose

    M = (D^2 - 2D + 2)^3

and we need to compute the general solution to the homogeneous problem

    MLy = (D^2 - 2D + 2)^3 (D^2 - 2D + 2)y = (D^2 - 2D + 2)^4 y = 0,

which implies

    y = (c_1 + c_2 t + c_3 t^2 + c_4 t^3) e^t cos(t) + (d_1 + d_2 t + d_3 t^2 + d_4 t^3) e^t sin(t).

If we now remove the part of this function corresponding to the solution of the homogeneous problem Ly = 0, we have

    y_p = (at^3 + bt^2 + ct) e^t cos(t) + (dt^3 + ft^2 + gt) e^t sin(t).

After a lengthy calculation the first derivative is

    y_p' = [(a + d)t^3 + (3a + b + f)t^2 + (2b + c + g)t + c] e^t cos(t)
         + [(-a + d)t^3 + (-b + f + 3d)t^2 + (-c + g + 2f)t + g] e^t sin(t)

and the second derivative is

    y_p'' = [2d t^3 + (6d + 6a + 2f)t^2 + (6a + 4b + 4f + 2g)t + 2b + 2c + 2g] e^t cos(t)
          + [-2a t^3 + (-6a + 6d - 2b)t^2 + (-4b + 4f + 6d - 2c)t + 2f + 2g - 2c] e^t sin(t).

Plugging all of this into the equation yields

    y_p'' - 2y_p' + 2y_p = [6d t^2 + (6a + 4f)t + 2g + 2b] e^t cos(t)
                         + [-6a t^2 + (-4b + 6d)t - 2c + 2f] e^t sin(t).

Equating coefficients with the right hand side leads to the equations

    6d = 0            -6a = 1
    6a + 4f = 0       -4b + 6d = 0
    2g + 2b = 0       -2c + 2f = 0

which have the solutions a = -1/6, b = 0, c = 1/4, d = 0, f = 1/4 and g = 0. Thus, a particular solution of the equation is

    y_p = (-t^3/6 + t/4) e^t cos(t) + (t^2/4) e^t sin(t).
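As a check on the algebra, the residual of the computed particular solution can be evaluated numerically (a sketch; the sample points, step size, and tolerance are arbitrary choices):

```python
import math

def yp(t):
    # the particular solution found above
    return (-t**3/6 + t/4) * math.exp(t) * math.cos(t) + (t**2/4) * math.exp(t) * math.sin(t)

h = 1e-4
def residual(t):
    # centered differences for y' and y'' in y'' - 2y' + 2y - t^2 e^t sin t
    d1 = (yp(t + h) - yp(t - h)) / (2 * h)
    d2 = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2
    return d2 - 2 * d1 + 2 * yp(t) - t**2 * math.exp(t) * math.sin(t)

for t in [0.3, 1.0, 2.5]:
    assert abs(residual(t)) < 1e-4
```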

The following table contains a guide for generating a particular solution when one applies the method of undetermined coefficients. In particular, consider Ly = f.

    If f(t) =                          seek y_p(t) =
    ---------------------------------------------------------------------------
    P_m(t) = c_0 t^m + ... + c_m       t^s (a_0 t^m + ... + a_m)

    P_m(t) e^{αt}                      t^s (a_0 t^m + ... + a_m) e^{αt}

    P_m(t) e^{αt} { sin βt             t^s e^{αt} [(a_0 t^m + ... + a_m) cos βt
                  { cos βt                          + (b_0 t^m + ... + b_m) sin βt]

Here s is the smallest nonnegative integer for which no term of y_p solves the homogeneous equation Ly = 0.

Example 1.3. Returning to Example 1.1 we see that the operator M = D annihilates f = 1, so we seek a constant particular solution; substituting into the equation immediately gives y_p = 1.


    1.4 Some Numerical Methods for ODEs

    Method: Collocation Method (a Weighted Residual Method)

Given a differential equation L[u] = 0 for u(x) with x ∈ Q (Q some domain) and boundary conditions B[u] = 0, we seek an approximate solution u(x) ≈ w(x; Λ) where Λ = {λ_j}_{j=1}^N is a set of parameters and B[w] = 0 for all choices of Λ. We try to determine the λ_j by requiring that the equation be satisfied by w at a set of points S = {x_j}_{j=1}^N ⊂ Q. Thus we arrive at a system of N equations in the N unknowns λ_j:

    L[w(x_j; Λ)] = 0,    j = 1, ..., N.

Example 1.4. Consider

    L[u] = u'' + u + x = 0,    (1.4.1)
    u(0) = 0,    u(1) = 0.

Suppose we choose to approximate the solution by

    w(x) = λ_1 x(1 - x) + λ_2 x(1 - x^2).

Note that w satisfies the boundary conditions. A straightforward calculation gives:

    L[w(x)] = -λ_1(2 - x + x^2) - λ_2(5x + x^3) + x.

Applying our collocation method at x_1 = 1/3 and x_2 = 2/3, we obtain the 2 × 2 system

    (48/27)λ_1 + (46/27)λ_2 = 1/3
    (48/27)λ_1 + (98/27)λ_2 = 2/3

which has solutions λ_1 = 9/416 and λ_2 = 9/52. Thus we obtain

    w(x) = (9/416) x(1 - x) + (9/52) x(1 - x^2).

The exact solution to this problem is

    u(x) = sin(x)/sin(1) - x

and the maximum difference between u and w on [0, 1] occurs at x = .7916 with a value of .00081.
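The whole computation is small enough to reproduce exactly. A sketch in Python (exact rational arithmetic for the 2 × 2 collocation system; the grid used for the error scan is an arbitrary choice):

```python
import math
from fractions import Fraction

# residual L[w] = -l1*(2 - x + x^2) - l2*(5x + x^3) + x, so collocation at a
# point x gives the equation (2 - x + x^2) l1 + (5x + x^3) l2 = x
def collocate(x1, x2):
    row = lambda x: (2 - x + x**2, 5*x + x**3, x)
    a11, a12, b1 = row(x1)
    a21, a22, b2 = row(x2)
    det = a11*a22 - a12*a21
    return (b1*a22 - a12*b2) / det, (a11*b2 - b1*a21) / det

l1, l2 = collocate(Fraction(1, 3), Fraction(2, 3))
assert (l1, l2) == (Fraction(9, 416), Fraction(9, 52))

# maximum error against the exact solution u(x) = sin(x)/sin(1) - x
w = lambda x: float(l1)*x*(1 - x) + float(l2)*x*(1 - x**2)
u = lambda x: math.sin(x)/math.sin(1) - x
err = max(abs(u(k/1000) - w(k/1000)) for k in range(1001))
assert err < 0.001   # the text reports .00081 at x = .7916
```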

    Method: Taylor Series

This method yields an approximate solution to a differential equation near a single point a. The method is applicable for differential equations in which all expressions are analytic functions of the variables. The idea is to use the differential equation to obtain the coefficients in a Taylor series about the point a. We note that this is really an application of the famous Cauchy-Kovalevskaya theorem.


Example 1.5. We seek an approximate solution to

    y'(x) = F(x, y),    y(a) = y_0

in the form

    y(x) = Σ_{j=0}^N (y^{(j)}(a)/j!)(x - a)^j.

The idea is to use the differential equation and initial condition to find the Taylor coefficients:

    y(a) = y_0;
    y' = F(x, y)  implies  y'(a) = F(a, y_0);
    y''(x) = F_x(x, y) + F_y(x, y) y'  implies  y''(a) = F_x(a, y_0) + F_y(a, y_0) F(a, y_0);

etc. Consider

    y' = x^2 - y^2,    y(0) = 1.

A straightforward calculation yields:

    y' = x^2 - y^2
    y'' = 2x - 2yy'
    y''' = 2 - 2(y')^2 - 2yy''
    y'''' = -6y'y'' - 2yy'''

from which we immediately obtain for a = 0

    y(0) = 1,  y'(0) = -1,  y''(0) = 2,  y'''(0) = -4,  y''''(0) = 20.

Thus we obtain the approximate solution

    y = 1 - x + (2/2!)x^2 - (4/3!)x^3 + (20/4!)x^4
      = 1 - x + x^2 - (2/3)x^3 + (5/6)x^4.
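These coefficients can also be generated mechanically by propagating truncated power series through the right-hand side; reading y' = x^2 - y^2 off coefficient by coefficient gives the recurrence c_{k+1} = ([x^k](x^2 - y^2))/(k + 1). A sketch in exact arithmetic:

```python
from fractions import Fraction

def taylor_coeffs(N):
    # Taylor coefficients c_j of the solution of y' = x^2 - y^2, y(0) = 1, about 0
    c = [Fraction(1)]
    for k in range(N):
        y2_k = sum(c[i] * c[k - i] for i in range(k + 1))  # [x^k] of y^2 (Cauchy product)
        x2_k = Fraction(1 if k == 2 else 0)
        c.append((x2_k - y2_k) / (k + 1))
    return c

# matches the hand computation: y = 1 - x + x^2 - (2/3)x^3 + (5/6)x^4 + ...
assert taylor_coeffs(4) == [1, -1, 1, Fraction(-2, 3), Fraction(5, 6)]
```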


Euler's Method:

This method, which can be derived many different ways, is not particularly practical but is by far the simplest method and the starting point for many more complicated methods.

Unlike the methods discussed in the earlier section, this method does not produce a function that can be evaluated at any point but rather gives approximate values of the solution at points of the form {(t_j, y_j)}_{j=1}^{M+1} where

    y(t_j) ≈ y_j,    a = t_1 < t_2 < ... < t_M < t_{M+1} = b.

While any mesh can be used, we will select the uniform mesh

    t_k = a + h(k - 1),    k = 1, ..., M + 1,    h = (b - a)/M.

Assuming that y, y' and y'' are continuous we can apply Taylor's theorem at t_1 to obtain

    y(t) = y(t_1) + y'(t_1)(t - t_1) + y''(c_1)(t - t_1)^2/2,    c_1 ∈ [t_1, t].

Using y'(t_1) = f(t_1, y(t_1)) and h = t_2 - t_1 we get an approximation for y at t_2:

    y(t_2) = y(t_1) + h f(t_1, y(t_1)) + y''(c_1) h^2/2.

For h sufficiently small we write

    y_2 = y_1 + h f(t_1, y_1)

to obtain an approximate value at t_2. Repeating, we have

    t_{k+1} = t_k + h,    y_{k+1} = y_k + h f(t_k, y_k),    k = 1, ..., M    (Euler's Method).

Assuming that at each step we begin at exactly y(t_k) (which won't be quite true), we define the local truncation error (LTE) to be the error obtained at a given step. In this case the LTE is

    y''(c_k) h^2/2.

Summing the LTEs all the way to t_M we get, roughly anyway, the so-called Global Truncation Error (GTE). In this case we have

    Σ_{k=1}^M y''(c_k) h^2/2 ≈ y''(c) M h^2/2 = (y''(c)/2) M ((b - a)/M) h = O(h).

Thus we see that the GTE, E(h), for Euler's method is

    E(h) ≈ Ch.

From this we have

    E(h/2) ≈ C(h/2) = (1/2)Ch ≈ (1/2)E(h).


So cutting the step size in half roughly reduces the GTE by a factor of 1/2.

Note that if the equation is y' = f(t) with y(a) = 0 then this method gives

    y(b) ≈ Σ_{k=1}^M f(t_k) h,

which is a Riemann sum approximation to the integral

    ∫_a^b f(t) dt.
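Euler's method is a few lines of code. The sketch below (the test problem and step sizes are my own choices) also confirms the first-order behavior E(h/2) ≈ E(h)/2 described above:

```python
import math

def euler(f, a, b, y0, M):
    # y_{k+1} = y_k + h f(t_k, y_k) on the uniform mesh with M steps
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        y += h * f(t, y)
        t += h
    return y

# test problem: y' = y, y(0) = 1, so y(1) = e
f = lambda t, y: y
e1 = abs(euler(f, 0.0, 1.0, 1.0, 100) - math.e)   # h = .01
e2 = abs(euler(f, 0.0, 1.0, 1.0, 200) - math.e)   # h = .005
assert 1.8 < e1 / e2 < 2.2    # halving h roughly halves the error
```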

Modified Euler's Method:

Once again we consider the initial value problem y' = f(t, y) and apply the fundamental theorem of calculus to get

    ∫_{t_1}^{t_2} f(t, y(t)) dt = ∫_{t_1}^{t_2} y'(t) dt = y(t_2) - y(t_1).

This implies that

    y(t_2) = y(t_1) + ∫_{t_1}^{t_2} f(t, y(t)) dt.

Now use the trapezoid rule to evaluate the integral with step size h = t_2 - t_1 to obtain

    y(t_2) ≈ y(t_1) + (h/2)[f(t_1, y(t_1)) + f(t_2, y(t_2))].

Unfortunately, we do not know y(t_2) on the right side, so we use, for example, Euler's method to approximate it: y(t_2) ≈ y(t_1) + h f(t_1, y(t_1)). We thus obtain a numerical procedure: denoting y(t_k) by y_k for each k, we set

    p_k = y_k + h f(t_k, y_k),    t_{k+1} = t_k + h,

    y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, p_k)].

Note that if the equation were y' = f(t) with y(0) = 0 then we would have

    y(b) ≈ (h/2) Σ_{k=1}^M [f(t_k) + f(t_{k+1})],

which is exactly the trapezoid rule for numerical quadrature.

In this case the LTE for the trapezoid rule is

    -y'''(c_k) h^3/12

and the GTE is

    Σ_{k=1}^M -y'''(c_k) h^3/12 ≈ -y'''(c) ((b - a)/12) h^2 = O(h^2).

Thus we see that the GTE E(h) for the Modified Euler Method is

    E(h) ≈ Ch^2.

From this we have

    E(h/2) ≈ C(h/2)^2 = (1/4)Ch^2 ≈ (1/4)E(h).

So cutting the step size in half roughly reduces the GTE by a factor of 1/4. This is sometimes called Heun's method. See the codes a9_2.m and huen.m.
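A sketch of the modified Euler procedure, with a test problem of my own choosing to confirm the second-order behavior E(h/2) ≈ E(h)/4:

```python
import math

def modified_euler(f, a, b, y0, M):
    # predict with Euler, correct with the trapezoid rule
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        p = y + h * f(t, y)                        # Euler predictor p_k
        y = y + (h / 2) * (f(t, y) + f(t + h, p))  # trapezoid corrector
        t += h
    return y

f = lambda t, y: y   # y' = y, y(0) = 1, so y(1) = e
e1 = abs(modified_euler(f, 0.0, 1.0, 1.0, 100) - math.e)
e2 = abs(modified_euler(f, 0.0, 1.0, 1.0, 200) - math.e)
assert 3.5 < e1 / e2 < 4.5    # halving h cuts the error by about 1/4
```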

Taylor's Method:

Assuming that y has derivatives of all orders we could, again by Taylor's theorem, for any N write

    y(t + h) = y(t) + y'(t)h + y^{(2)}(t)h^2/2 + ... + y^{(N)}(t)h^N/N!    (1.4.2)

with a LTE of

    y^{(N+1)}(c) h^{N+1}/(N + 1)!.

We can adapt this result to each interval [t_k, t_{k+1}] to obtain a numerical procedure

    y_{k+1} = y_k + d_1 h + d_2 h^2/2 + d_3 h^3/6 + ... + d_N h^N/N!

where d_j = y^{(j)}(t_k) for j = 1, 2, ..., N. In this case the LTE is O(h^{N+1}) and the GTE is E_N(h) ≈ Ch^N = O(h^N). Thus we have

    E_N(h/2) ≈ (1/2^N) E_N(h).

Computing the values d_k for a specific example is not too bad. Consider, for example, (see the exercises)

    y' = (t - y)/2
    y^{(2)} = (2 - t + y)/4
    y^{(3)} = (-2 + t - y)/8
    y^{(4)} = (2 - t + y)/16

from which we readily compute the values d_j.

There is a Taylor's method matlab code named taylor.m taken from John Mathews' book Chapter 9. The file a9_3.m runs an example.
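Using these four derivative formulas, Taylor's method of order 4 for y' = (t - y)/2 is a direct translation. In the sketch below the initial condition y(0) = 1 is my choice, for which the exact solution is y = t - 2 + 3e^{-t/2}:

```python
import math

def taylor4(a, b, y0, M):
    # order-4 Taylor method for y' = (t - y)/2 using d_1, ..., d_4 from above
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        d1 = (t - y) / 2
        d2 = (2 - t + y) / 4
        d3 = (-2 + t - y) / 8
        d4 = (2 - t + y) / 16
        y += d1*h + d2*h**2/2 + d3*h**3/6 + d4*h**4/24
        t += h
    return y

# exact solution with y(0) = 1 is y = t - 2 + 3 e^{-t/2}
approx = taylor4(0.0, 2.0, 1.0, 16)
assert abs(approx - 3*math.exp(-1.0)) < 1e-5
```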

    Runge-Kutta Methods:


The disadvantage of Taylor's method is obvious. We have to do some analytic calculations on every problem. To eliminate this difficulty and still have a higher order method we consider the one-step Runge-Kutta methods, which are related to Taylor's method but do not require separate calculations.

We give a brief discussion of this method in the second order case. The fourth order case is more common but the algebra is very messy. In the second order case the idea is to look for a formula

    y_{k+1} = y_k + h F(t_k, y_k, h; f)

with F in the form

    F(t, y, h; f) = γ_1 f(t, y) + γ_2 f(t + αh, y + βh f(t, y)),

where γ_1, γ_2, α, β are to be determined.

We apply Taylor's theorem in two variables (t, y) to the second term (the term containing γ_2) to obtain an approximation through the second derivative terms:

    F(t, y, h; f) = γ_1 f(t, y) + γ_2 [ f(t, y) + h(α f_t + βf f_y)
                    + h^2 ((1/2)α^2 f_tt + αβ f_ty f + (1/2)β^2 f^2 f_yy) ] + O(h^3)

where

    f_t = ∂f/∂t,    f_y = ∂f/∂y,    etc.

Similarly, we have

    y' = f(t, y)
    y'' = f_t + f_y y' = f_t + f_y f
    y''' = f_tt + 2f_ty f + f_yy f^2 + f_t f_y + f_y^2 f.

Now if we expand y(t) in a Taylor polynomial expansion about t_n we have

    y(t_n + h) = y(t_n) + y'(t_n)h + (y''(t_n)/2)h^2 + (y'''(t_n)/6)h^3 + O(h^4).

Let us denote y^{(k)}(t_n) = y_n^{(k)}; then the LTE is

    LTE = y(t_{n+1}) - y(t_n) - h F(t_n, y(t_n), h; f)
        = h y_n^{(1)} + (h^2/2) y_n^{(2)} + (h^3/6) y_n^{(3)} + O(h^4) - h F(t_n, y(t_n), h; f)
        = h[1 - γ_1 - γ_2] f + h^2 [(1/2 - γ_2 α) f_t + (1/2 - γ_2 β) f_y f]
          + h^3 [(1/6 - (1/2)γ_2 α^2) f_tt + (1/3 - γ_2 αβ) f_ty f + (1/6 - (1/2)γ_2 β^2) f_yy f^2
          + (1/6) f_y f_t + (1/6) f_y^2 f] + O(h^4).

For general f we cannot eliminate the third term but by choosing

    γ_1 + γ_2 = 1,    γ_2 α = 1/2,    γ_2 β = 1/2


we can eliminate the first two terms. This implies that the LTE is O(h^3). The system of equations is underdetermined but we can write

    γ_1 = 1 - γ_2,    α = β = 1/(2γ_2)

for arbitrary γ_2 ≠ 0.

Special Cases:

For γ_2 = 1/2 we get the modified Euler's Method

    y_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n))].

For γ_2 = 1 we get the so-called Midpoint Method

    y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n)).

We now consider choosing γ_2 in order to minimize the LTE at the O(h^3) level. Note that

    LTE = c(f, γ_2) h^3 + O(h^4)

where

    c(f, γ_2) = (1/6 - 1/(8γ_2)) f_tt + (1/3 - 1/(4γ_2)) f_ty f + (1/6 - 1/(8γ_2)) f_yy f^2
                + (1/6) f_y f_t + (1/6) f_y^2 f.

Now applying the Cauchy-Schwarz inequality we get

    |c(f, γ_2)| ≤ c_1(f) c_2(γ_2),

    c_1(f) = [ f_tt^2 + f_ty^2 f^2 + f_yy^2 f^4 + f_y^2 f_t^2 + f_y^4 f^2 ]^{1/2},

    c_2(γ_2) = [ (1/6 - 1/(8γ_2))^2 + (1/3 - 1/(4γ_2))^2 + (1/6 - 1/(8γ_2))^2 + 1/18 ]^{1/2}.

We can compute the minimum of c_2(γ_2) to obtain

    γ_2 = 3/4,    with    c_2(3/4) = 1/√18.

The resulting method is usually referred to as Heun's Method, which takes the form

    y_{k+1} = y_k + (h/4)[ f(t_k, y_k) + 3 f(t_k + (2/3)h, y_k + (2/3)h f(t_k, y_k)) ].
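In code, Heun's method (the γ_2 = 3/4 member of the family, so α = β = 2/3) looks like this; the test problem and step sizes are my choice, and the assertion checks the expected second-order convergence:

```python
import math

def heun(f, a, b, y0, M):
    # RK2 with gamma_1 = 1/4, gamma_2 = 3/4, alpha = beta = 2/3
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        k1 = f(t, y)
        k2 = f(t + 2*h/3, y + 2*h/3 * k1)
        y += h/4 * (k1 + 3*k2)
        t += h
    return y

f = lambda t, y: y   # y' = y, y(0) = 1, so y(1) = e
e1 = abs(heun(f, 0.0, 1.0, 1.0, 100) - math.e)
e2 = abs(heun(f, 0.0, 1.0, 1.0, 200) - math.e)
assert 3.5 < e1 / e2 < 4.5
```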


There is a Heun's method matlab code named huen.m in Chapter 9 taken from John Mathews' book.

This same analysis can be carried out for higher order methods but the algebra becomes very messy. One method that was very popular until about 1970 was the 4th order Runge-Kutta (RK4) method with a LTE of O(h^5). If f does not depend on y this method reduces to Simpson's rule for quadrature. The RK4 method goes like this:

    f_1 = f(t_k, y_k)
    f_2 = f(t_k + h/2, y_k + (h/2) f_1)
    f_3 = f(t_k + h/2, y_k + (h/2) f_2)
    f_4 = f(t_k + h, y_k + h f_3)

    y_{k+1} = y_k + (h/6)(f_1 + 2f_2 + 2f_3 + f_4).
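A direct transcription of RK4, again with a test problem of my own choosing; the last assertion checks the fourth-order behavior E(h/2) ≈ E(h)/16:

```python
import math

def rk4(f, a, b, y0, M):
    # classical fourth-order Runge-Kutta as written above
    h = (b - a) / M
    t, y = a, y0
    for _ in range(M):
        f1 = f(t, y)
        f2 = f(t + h/2, y + h/2 * f1)
        f3 = f(t + h/2, y + h/2 * f2)
        f4 = f(t + h, y + h * f3)
        y += h/6 * (f1 + 2*f2 + 2*f3 + f4)
        t += h
    return y

f = lambda t, y: y   # y' = y, y(0) = 1, so y(1) = e
e1 = abs(rk4(f, 0.0, 1.0, 1.0, 10) - math.e)    # h = .1
e2 = abs(rk4(f, 0.0, 1.0, 1.0, 20) - math.e)    # h = .05
assert e1 < 1e-5
assert 12 < e1 / e2 < 20    # roughly a factor of 2^4 = 16
```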

Assignment 3:

1. Program Example 1.4 using Maple, plot u and w and check the error given in the example.

2. Try using two different trial functions to approximate the solution in Example 1.4.

3. Try using different points x_1 and x_2, e.g. x_1 = 1/4 and x_2 = 3/4 in Example 1.4.

4. Use Maple to carry out the Taylor method up to order n for y' = F(x, y) with y(a) = y_0. Apply your program with n = 4, 6, 8 and plot the results. Use

   a) F(x, y) = y^2, y(1) = 1. How do your answers compare with the exact solution on [1, 2]?

   b) Same question with F(x, y) = 3x^2 y, y(0) = 1 on [0, 1].

5. Apply Euler's method to the following:

   a) y' = t^2 - y on [0, 2] with y(0) = 1. The exact answer is y = e^{-t} + t^2 - 2t + 2. Use h = .2, .1, .05.

   b) y' = 1/(1 - t^2) on [0, 1] with y(0) = 1. Note that the exact answer doesn't exist on this interval. Try the methods anyway with h = .1, .05, .025.

   c) y' = e^{-2t} - 2y on [0, 2] with y(0) = 1/10 and h = .2, .1, .05. Here y = e^{-2t}/10 + t e^{-2t}.

6. Apply the midpoint method, the modified Euler method and Heun's method for the following problem with the same h in each case.

    y' = -y + t + 1,    0 ≤ t ≤ 1,    y(0) = 1.

What do you observe, i.e., how do they compare?

7. Solve the equation y' = 1 + y^2 using any method above and Taylor's method of order 2, 4 and 6 on [0, 5] with y(0) = 1 and compare your answers to y = tan(t + π/4).


1.5 Wronskian and Linear Independence

Consider an nth order linear nonhomogeneous equation

    y^{(n)} + a_1(t) y^{(n-1)} + a_2(t) y^{(n-2)} + ... + a_n(t) y = f(t)    (1.5.1)

assuming a_i(t), f(t) ∈ C[a, b]. Let

    L(·) = d^n(·)/dt^n + a_1 d^{n-1}(·)/dt^{n-1} + ... + a_n(·).

Then (1.5.1) may be written as the linear nonhomogeneous equation

    L(y) = f.    (1.5.2)

The operator L is said to be linear since

    L(c_1 y_1(t) + c_2 y_2(t)) = c_1 L(y_1(t)) + c_2 L(y_2(t)).

It easily follows that the set of solutions of the homogeneous linear equation

    L(y) = 0    (1.5.3)

is a vector space.

The Wronskian of {y_j(t)}_{j=1}^n is defined as

    W(t) ≡ W({y_j(t)}_{j=1}^n)(t) = det | y_1(t)          ...   y_n(t)          |
                                        | y_1'(t)         ...   y_n'(t)         |    (1.5.4)
                                        | ...                   ...             |
                                        | y_1^{(n-1)}(t)  ...   y_n^{(n-1)}(t)  |.

Theorem 1.1. [Abel's Formula] If y_1, ..., y_n are solutions of (1.5.3), and t_0 ∈ (a, b), then

    W(t) = W(t_0) exp( -∫_{t_0}^t a_1(s) ds ).

Thus the Wronskian of {y_1, ..., y_n} is either never zero or identically zero.

Proof. Differentiating the determinant one row at a time expresses W'(t) as a sum of n determinants, the k-th of which is W(t) with its k-th row differentiated. For k = 1, ..., n - 1 the differentiated k-th row coincides with row k + 1, so those determinants vanish and only the last survives:

    W'(t) = det | y_1           ...   y_n           |
                | y_1'          ...   y_n'          |
                | ...                 ...           |
                | y_1^{(n-2)}   ...   y_n^{(n-2)}   |
                | y_1^{(n)}     ...   y_n^{(n)}     |.

Using the differential equation, y_j^{(n)} = -Σ_{i=1}^n a_i y_j^{(n-i)}; every term of this sum with i ≥ 2 is a multiple of one of the earlier rows and contributes nothing to the determinant, so only the term -a_1 y_j^{(n-1)} remains in the last row, and

    W'(t) = -a_1(t) W(t).

Hence

    W(t) = K exp( -∫_{t_0}^t a_1(s) ds ),

and setting t = t_0 gives K = W(t_0), that is,

    W(t) = W(t_0) exp( -∫_{t_0}^t a_1(s) ds ).
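Abel's formula can be illustrated concretely with the basis {cos t, sin t, e^{-t}} from Example 1.1, where a_1 = 1 and W(0) = 2, so W(t) = 2e^{-t}. A quick numerical check (the sample points are arbitrary):

```python
import math

def det3(m):
    # cofactor expansion of a 3x3 determinant
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def W(t):
    # Wronskian of cos t, sin t, e^{-t}
    c, s, e = math.cos(t), math.sin(t), math.exp(-t)
    return det3([[ c,  s,  e],
                 [-s,  c, -e],
                 [-c, -s,  e]])

for t in [0.0, 0.7, 2.0]:
    assert abs(W(t) - 2*math.exp(-t)) < 1e-12
```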

Definition 1.1. A collection of functions {y_i(t)}_{i=1}^k is linearly independent on (a, b) if

    Σ_{i=1}^k c_i y_i(t) = 0 for all t ∈ (a, b)  implies  c_j = 0 for j = 1, ..., k.

Otherwise we say the set {y_i(t)} is linearly dependent.

Theorem 1.2. Suppose y_1, ..., y_n are solutions of (1.5.3). If the functions are linearly dependent on (a, b) then W(t) = 0 for all t ∈ (a, b). Conversely, if there is a t_0 ∈ (a, b) so that W(t_0) = 0, then W(t) = 0 for all t ∈ (a, b) and the y_i(t) are linearly dependent on (a, b).

Proof. If the {y_i(t)} are linearly dependent, then there are constants c_i, not all zero, such that

    Σ_i c_i y_i(t) = 0 for all t ∈ (a, b),

and, differentiating,

    Σ_i c_i y_i^{(k)}(t) = 0 for all t and any k.

Hence, defining

    M(t) = | y_1(t)          ...   y_n(t)          |        | c_1 |
           | ...                   ...             |,   C = | ... |,
           | y_1^{(n-1)}(t)  ...   y_n^{(n-1)}(t)  |        | c_n |


the system can be written as

    M(t) C = 0,

and since C ≠ 0 we see that M(t) is singular and therefore

    W(t) = det(M(t)) = 0 for all t ∈ (a, b).

Conversely, if W(t_0) = det(M(t_0)) = 0 then

    M(t_0) C = 0

has a nontrivial solution. For this choice of c_i's, let

    y(t) = Σ c_i y_i(t).

Then

    y(t_0) = 0,  y'(t_0) = 0,  ...,  y^{(n-1)}(t_0) = 0,

and since y is a solution of Ly = 0, from the uniqueness part of the fundamental existence-uniqueness theorem, we must have y(t) = 0 for all t ∈ (a, b).

Example 1.6. Consider y_1 = t^2, y_2 = t|t|. Then

    W(t) = | t^2    t|t| | = 2t^2|t| - 2t^2|t| ≡ 0.
           | 2t     2|t| |

However, y_1(t), y_2(t) are not linearly dependent. For suppose

    c_1 y_1(t) + c_2 y_2(t) = 0 for all t.

Then

    c_1 t + c_2 |t| = 0 for all t.

If t > 0,

    c_1 t + c_2 t = 0  implies  c_1 = -c_2,

while for t < 0,

    c_1 t - c_2 t = 0  implies  c_1 = c_2.

Hence c_1 = c_2 = 0 and so y_1, y_2 are linearly independent on any interval (a, b) containing 0. Thus y_1, y_2 cannot both be solutions of a linear homogeneous 2nd order equation on (a, b).

1.6 Completeness of Solution for Constant Coefficient ODE

We have already learned, in Section 1.2, how to find a set of n solutions to any homogeneous equation of the form Ly = 0 with L = D^n + a_1 D^{n-1} + ... + a_{n-1} D + a_n. Namely, we factor the operator into a product of factors (D - r)^k and (D^2 - 2αD + α^2 + β^2)^m. Having done this we simply observe that the general solution of the associated homogeneous problem for each of these types of operators is easy to write out. Namely, we have

    (D - r)^k y = 0  implies  y = Σ_{j=1}^k c_j t^{j-1} e^{rt}    (1.6.1)

    (D^2 - 2αD + α^2 + β^2)^m y = 0  implies  y = Σ_{j=1}^m c_j t^{j-1} e^{αt} cos(βt)
                                                  + Σ_{j=1}^m d_j t^{j-1} e^{αt} sin(βt)    (1.6.2)

In the case that the coefficients a_i are constant, it is possible to describe the solutions explicitly by simply solving the homogeneous equation for each factor and adding these terms together. What we have not proved is that all such solutions give a basis for the null space of L, i.e., we have not shown that the solutions are linearly independent. To show that these solutions are linearly independent is not really difficult, but to do it completely rigorously and carefully is a bit lengthy.

First we note that

Lemma 1.2. If λ = α + iβ is a real (β = 0) or complex number, then

    y = ( Σ_{j=1}^k c_j t^{j-1} ) e^{λt}

is the complete solution of (D - λ)^k y = 0.

Proof. Showing that the solutions of (D - λ)^k y = 0 are linearly independent amounts to showing that

    Σ_{j=1}^k c_j t^{j-1} e^{λt} = 0 for all t ∈ R  implies  c_j = 0, j = 1, 2, ..., k.

But, on noting that e^{λt} ≠ 0 and dividing, this result is obvious from the fundamental theorem of algebra, which implies that a nonzero polynomial of degree less than k can have at most k - 1 zeros.

Lemma 1.3. If λ_1 ≠ λ_2 are two complex numbers and

    p(t) = Σ_{j=1}^k c_j t^{j-1}  and  q(t) = Σ_{j=1}^ℓ d_j t^{j-1}

are two polynomials, then

    p(t) e^{λ_1 t} = q(t) e^{λ_2 t} for all t ∈ R  implies  p(t) ≡ 0, q(t) ≡ 0.


Proof. To see that this is true we first multiply both sides of the equation by e^{-λ_1 t} so that

    p(t) = q(t) e^{(λ_2 - λ_1)t} for all t ∈ R.

Now consider the cases in which α < 0, α > 0 and α = 0, where (λ_2 - λ_1) = α + iβ. If α < 0 then (using L'Hospital's rule in the first term)

    lim_{t→+∞} q(t) e^{(λ_2 - λ_1)t} = 0,  while  lim_{t→+∞} p(t) = ±∞ unless p ≡ 0 (as c_k is pos. or neg.).

So we must have p(t) ≡ 0 and then q(t) ≡ 0. If α > 0 we repeat the same argument with the first limit replaced by t → -∞. Finally, in the case α = 0 we divide both sides of the equation by q(t) and collect real and imaginary parts to obtain

    r_1(t) + i r_2(t) = p(t)/q(t) = e^{iβt} = cos(βt) + i sin(βt),

where r_1(t) and r_2(t) are rational functions with real coefficients. Equating real and imaginary parts we see that this would imply that

    r_1(t) = cos(βt),    r_2(t) = sin(βt),

which is impossible unless r_1(t) ≡ 0 and r_2(t) ≡ 0, since the right side has infinitely many zeros while the left can have only a finite number. This in turn implies that p(t) ≡ 0 and also q(t) ≡ 0.

Lemma 1.4. If ℓ > 0, λ_1 ≠ λ_2 are real or complex numbers and

    (D - λ_2)^ℓ ( p(t) e^{λ_1 t} ) = 0

where p(t) is a polynomial, then p(t) ≡ 0.

Proof. We know that every solution of (D - λ_2)^ℓ y = 0 can be written as y = q(t) e^{λ_2 t} for some polynomial q(t) of degree at most (ℓ - 1). So the question is whether or not there exists a polynomial q(t) so that

    p(t) e^{λ_1 t} = q(t) e^{λ_2 t}.

We note that this is only possible when p(t) ≡ 0 and q(t) ≡ 0 by Lemma 1.3.

Lemma 1.5. If p(t) is any polynomial of degree less than or equal to (n - 1) then

    (D - λ_1)^m ( p(t) e^{λ_2 t} ) = q(t) e^{λ_2 t}

where q(t) is a polynomial of degree at most the degree of p(t).

Proof. Consider the case m = 1. We have

    (D - λ_1)( p(t) e^{λ_2 t} ) = q(t) e^{λ_2 t}

where q(t) = p'(t) + (λ_2 - λ_1) p(t), which is a polynomial of degree at most the degree of p(t). You can now iterate this result for general m > 0.


Lemma 1.6. If L(y) = y^{(n)} + a_1 y^{(n-1)} + ... + a_n y has real coefficients and p(r) = r^n + a_1 r^{n-1} + ... + a_n, then p(z̄) = conj(p(z)) for all z ∈ C. Therefore if p(α + iβ) = 0 then p(α - iβ) = 0.

Proof. For every z_1, z_2 ∈ C we have conj(z_1 + z_2) = z̄_1 + z̄_2 and conj(z_1 z_2) = z̄_1 z̄_2, which also implies conj(z_1^n) = (z̄_1)^n.

From Lemma 1.6 we know that for a differential operator L with real coefficients, all complex roots must occur in complex conjugate pairs (counting multiplicity), and from Lemma 1.2 we know that for a pair of complex roots λ = α + iβ, λ̄ = α - iβ, each of multiplicity k, a set of 2k linearly independent solutions is given for j = 0, ..., (k - 1) by

    t^j e^{λt} = t^j e^{αt} ( cos(βt) + i sin(βt) ),
    t^j e^{λ̄t} = t^j e^{αt} ( cos(βt) - i sin(βt) ).

From this we see that there is a set of real solutions given as linear combinations of these solutions by

    t^j e^{αt} cos(βt) = (1/2) t^j ( e^{λt} + e^{λ̄t} )

and

    t^j e^{αt} sin(βt) = (1/(2i)) t^j ( e^{λt} - e^{λ̄t} ).

We already know from Lemma 1.2 that t^j e^{λt} and t^j e^{λ̄t} are linearly independent. Suppose we have a linear combination

    Σ_j [ c_j t^j e^{αt} cos(βt) + d_j t^j e^{αt} sin(βt) ] = 0.

This would imply that

    Σ_j [ ((c_j - d_j i)/2) t^j e^{λt} + ((c_j + d_j i)/2) t^j e^{λ̄t} ] = 0,

but since these functions are independent this implies

    (c_j - d_j i) = 0,  (c_j + d_j i) = 0,  which implies  c_j = d_j = 0.

Combining these results we have the main theorem:

Theorem 1.3. If L(y) = y^{(n)} + a_1 y^{(n-1)} + ... + a_n y has real coefficients and we assume that the polynomial p(r) = r^n + a_1 r^{n-1} + ... + a_n has zeros given by

    r_1, r̄_1, r_2, r̄_2, ..., r_ℓ, r̄_ℓ, r_{2ℓ+1}, ..., r_s,

where r_j = α_j + β_j i with α_j, β_j ∈ R and β_j ≠ 0 for j = 1, ..., ℓ, and r_j for j = 2ℓ + 1, ..., s are real. Let r_j have multiplicity m_j for all j. Then if p_j(t) and q_j(t) denote arbitrary polynomials (with real coefficients) of degree (m_j - 1), the general solution of Ly = 0 can be written as

    y = Σ_{j=1}^ℓ e^{α_j t} [ p_j(t) cos(β_j t) + q_j(t) sin(β_j t) ] + Σ_{j=2ℓ+1}^s p_j(t) e^{r_j t}.


Proof. We need only prove that all the functions making up this general linear combination are linearly independent. We already know that each particular term, i.e., a term of the form p_j(t) e^{r_j t} or e^{α_j t}[p_j(t) cos(β_j t) + q_j(t) sin(β_j t)], consists of linearly independent functions. Note also that by rewriting this last expression in terms of complex exponentials, we obtain functions of the form p_j(t) e^{r_j t} and p_j(t) e^{r̄_j t}. Thus let us suppose that we have a general linear combination of the form

    Σ_{j=1}^k p_j(t) e^{r_j t} = 0,  for some k,

where all we assume is that r_i ≠ r_j for i ≠ j. We want to show this implies that every polynomial p_j ≡ 0. We prove this by induction:

1. The case k = 1 we have already done.

2. Assume that the statement holds for k - 1, i.e., Σ_{j=1}^{k-1} p_j(t) e^{r_j t} = 0 implies that every p_j(t) ≡ 0.

3. Assume that Σ_{j=1}^k p_j(t) e^{r_j t} = 0. We now apply (D - r_k)^{m_k} to this expression and note that (D - r_k)^{m_k} ( p_k(t) e^{r_k t} ) = 0, so that the sum reduces to

    Σ_{j=1}^{k-1} (D - r_k)^{m_k} ( p_j(t) e^{r_j t} ) = 0.

By Lemma 1.5 this sum can be written as

    Σ_{j=1}^{k-1} q_j(t) e^{r_j t} = 0

where

    (D - r_k)^{m_k} ( p_j(t) e^{r_j t} ) = q_j(t) e^{r_j t}.

By the induction hypothesis we have q_j(t) ≡ 0 for all j = 1, ..., (k - 1). But this implies that

    (D - r_k)^{m_k} ( p_j(t) e^{r_j t} ) = 0,    j = 1, ..., (k - 1),

which by Lemma 1.4 implies that p_j(t) ≡ 0 for all j = 1, ..., (k - 1). Finally we see that the original expression reduces to

    p_k(t) e^{r_k t} = 0,

which implies that p_k(t) ≡ 0.


    Assignment 1

    1. Solve the differential equations.

(a) y' + 2xy + xy^4 = 0

(b) y' = y/x + sin((y - x)/x)

(c) (2x^3 y^2 + 4x^2 y + 2xy^2 + xy^4 + 2y) dx + 2(y^3 + x^2 y + x) dy = 0

(d) (y - xy')^2 = 1 + (y')^2

(e) x^2 y y'' + x^2 (y')^2 - 5xy y' + 4y^2 = 0

(f) y' = (5x + y)^2 - 4

(g) xy dx + (x^2 + y^2) dy = 0

(h) y' = (y/x)^2 - 8(x/y)^2

    (i) x =y

    3y 2yy2

(j) (y')^4 - (x + 2y + 1)(y')^3 + (x + 2y + 2xy)(y')^2 - 2xy y' = 0

2. Solve 1 + y y'' + (y')^2 = 0.

3. Consider a differential equation M(x, y) dx + N(x, y) dy = 0 and assume that there is an integer n so that M(λx, λy) = λ^n M(x, y), N(λx, λy) = λ^n N(x, y) (i.e., the equation is homogeneous).

Then show that μ = (xM + yN)^{-1} is an integrating factor provided that (xM + yN) is not identically zero. Also, investigate the case in which (xM + yN) ≡ 0.

    4. Solve the equations

(a) (x^4 + y^4) dx - xy^3 dy = 0

(b) y^2 dx + (x^2 - xy - y^2) dy = 0

    5. Find the general solution for the differential equation with independent variable x.

(a) y'' + 2y' - 8y = 0

(b) y''' + 3y'' + 28y' + 26y = 0

(c) y''' - 9y'' + 27y' - 27y = 0

(d) y^{(4)} + 4y'' + 4y = 0

(e) (D - 1)^2 (D + 3)(D^2 + 2D + 5)^2 y = 0

6. Solve the initial value problem

    y''' - y'' - 4y' + 4y = 0,    y(0) = 4, y'(0) = 1, y''(0) = 19.


    7. Find the general solution

(a) y'' + 2y' + 10y = 25x^2 + 3.

(b) y'' + y' - 6y = 6x^3 + 3x^2 + 6x.

(c) y'' + 10y' + 25y = e^{-5x}.

(d) y'' + 1.5y' - y = 12x^2 + 6x^3 - x^4 with y(0) = 4 and y'(0) = -8.

(e) y'' + 2y' + y = e^{-x} cos(x).

(f) y'' - 2y' + y = e^x / x^3.

(g) (x^2 D^2 - 2xD + 2)y = x^3 cos(x).

(h) (x^2 D^2 + xD - 9)y = 48x^5.

    8. Find the general solution.

a) y'' + y = sin x sin 2x

b) y'' - 4y' + 4y = e^x + e^{2x}

c) y'' + 9y = sec 3x

d) y'' - 2y' + y = e^x / x^3