Mathematics Introduction for MSc
8/21/2019 Mathematics Introduction for MSc
Introductory Mathematics
Lecture Notes
Deqing Huang & Martin Obligado
Department of Aeronautics
Imperial College London
8th October 2014
Basic Information
Course: Introductory Mathematics
Degree: MSc in Advanced Computational Methods for Aeronautics, Flow
Management and Fluid-Structure Interaction
Lecturer: Deqing Huang & Martin Obligado
E-mail: [email protected]
E-mail: [email protected]
Textbook: Mathematical Tools for Physics
Author: James Nearing
Course Content
Lecture Topic Lecturer
1 Function Expansions & Transforms D. Huang
2-3 Vector Spaces, Vector Fields & Operators D. Huang
4-5 Linear Algebra, Matrices, Eigenvectors D. Huang
6-7 Vector Calculus – Integral Theorems D. Huang & M. Obligado
8-9-10 Partial Differential Equations M. Obligado
Introductory Mathematics
0-What Is Mathematics?
Different schools of thought, particularly in philosophy, have put forth radically different
definitions of mathematics. All are controversial and there is no consensus.
Survey of leading definitions
(1) Aristotle defined mathematics as: The science of quantity. In Aristotle's classification of the sciences, discrete quantities were studied by arithmetic, continuous quantities by geometry.
(2) Auguste Comte’s definition tried to explain the role of mathematics in coordin-
ating phenomena in all other fields: The science of indirect measurement , 1851. The
“indirectness” in Comte’s definition refers to determining quantities that cannot be meas-
ured directly, such as the distance to planets or the size of atoms, by means of their
relations to quantities that can be measured directly.
(3) Benjamin Peirce: Mathematics is the science that draws necessary conclusions,
1870.
(4) Bertrand Russell: All Mathematics is Symbolic Logic, 1903.
(5) Walter Warwick Sawyer: Mathematics is the classification and study of all possible patterns, 1955.
Most contemporary reference works define mathematics mainly by summarizing its main topics and methods:
(6) Oxford English Dictionary: The abstract science which investigates deductively
the conclusions implicit in the elementary conceptions of spatial and numerical relations,
and which includes as its main divisions geometry, arithmetic, and algebra, 1933.
(7) American Heritage Dictionary: The study of the measurement, properties, and
relationships of quantities and sets, using numbers and symbols, 2000.
Playful, metaphorical, and poetic definitions
(1) Bertrand Russell: The subject in which we never know what we are talking about, nor whether what we are saying is true, 1901.
(2) Charles Darwin: A mathematician is a blind man in a dark room looking for a black cat which isn't there.
(3) G. H. Hardy: A mathematician, like a painter or poet, is a maker of patterns.
If his patterns are more permanent than theirs, it is because they are made with ideas,
1940.
1-Field of Mathematics
Mathematics can, broadly speaking, be subdivided into the study of quantity, structure,
space, and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these
main concerns, there are also subdivisions dedicated to exploring links from the heart
of mathematics to other fields: to logic, to set theory (foundations), to the empirical
mathematics of the various sciences (applied mathematics), and more recently to the
rigorous study of uncertainty.
When I was an undergraduate, I learned that the majors in our mathematics department included pure mathematics, applied mathematics, statistics, and computational mathematics.
When I was a master's student, I learned that the directions in pure mathematics include topology, algebra, number theory, differential equations and dynamical systems, differential geometry, and functional analysis.
2-Mathematical awards
Arguably the most prestigious award in mathematics is the Fields Medal, established
in 1936 and now awarded every four years. The Fields Medal is often considered a
mathematical equivalent to the Nobel Prize.
The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement,
and another major international award, the Abel Prize, was introduced in 2003. The
Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades
are awarded in recognition of a particular body of work, which may be innovational, or
provide a solution to an outstanding problem in an established field.
A famous list of 23 open problems, called Hilbert's problems, was compiled in 1900
by German mathematician David Hilbert. This list achieved great celebrity among math-
ematicians, and at least nine of the problems have now been solved. A new list of seven
important problems, titled the Millennium Prize Problems, was published in 2000. A
solution to each of these problems carries a $1 million reward, and only one (the Riemann hypothesis) is duplicated in Hilbert's problems.
3-Mathematics in aeronautics
Mathematics in aeronautics includes calculus, differential equations, and linear algebra,
etc.
4-Calculus [1]
Calculus has been an integral part of man's intellectual training and heritage for the last twenty-five hundred years. Calculus is the mathematical study of change, in the same
way that geometry is the study of shape and algebra is the study of operations and their
application to solving equations. It has two major branches, differential calculus (con-
cerning rates of change and slopes of curves), and integral calculus (concerning accu-
mulation of quantities and the areas under and between curves); these two branches are
related to each other by the fundamental theorem of calculus. Both branches make use
[1] Extracted from: Boyer, Carl Benjamin. The History of the Calculus and Its Conceptual Development. Courier Dover Publications, 1949.
of the fundamental notions of convergence of infinite sequences and infinite series to a
well-defined limit. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz; today calculus has widespread uses in science, engineering, and economics, and can solve many problems that algebra alone cannot.
Differential and integral calculus is one of the great achievements of the human mind. The fundamental definitions of the calculus, those of the derivative and the integral, are now so clearly stated in textbooks on the subject, and the operations involving them are so readily mastered, that it is easy to forget the difficulty with which these basic concepts have been developed. Frequently a clear and adequate understanding of the fundamental notions underlying a branch of knowledge has been achieved comparatively late in its development. This has never been more aptly demonstrated than in the rise of the calculus. The precision of statement and the facility of application which the rules of the calculus early afforded were in a measure responsible for the fact that mathematicians were insensible to the delicate subtleties required in the logical development of the discipline. They sought to establish the calculus in terms of the conceptions found in the traditional geometry and algebra which had been developed from spatial intuition. During the eighteenth century, however, the inherent difficulty of formulating the underlying concepts became increasingly evident, and it then became customary to speak of the "metaphysics of the calculus", thus implying the inadequacy of mathematics to give a satisfactory exposition of the bases. With the clarification of the basic notions, which in the nineteenth century was given in terms of precise mathematical terminology, a safe course was steered between the intuition of the concrete in nature (which may lurk in geometry and algebra) and the mysticism of imaginative speculation (which may thrive on transcendental metaphysics). The derivative has throughout its development been thus precariously situated between the scientific phenomenon of velocity and the philosophical noumenon of motion.
The history of the integral is similar. On the one hand, it had offered ample opportunity for interpretations by positivistic thought in terms either of approximations or of the compensation of errors, views based on the admitted approximative nature of scientific measurements and on the accepted doctrine of superimposed effects. On the other hand, it has at the same time been regarded by idealistic metaphysics as a manifestation that beyond the finitism of sensory percipiency there is a transcendent infinite which can be but asymptotically approached by human experience and reason. Only the precision of their mathematical definition, the work of the nineteenth century, enables the derivative and the integral to maintain their autonomous position as abstract concepts, perhaps derived from, but nevertheless independent of, both physical description and metaphysical explanation.
Function Expansions & Transforms
Many important differential equations appearing in practice cannot be solved in terms of elementary functions. Solutions can often only be expressed as infinite series, that is, infinite sums of simpler functions such as polynomials or trig functions.
We must therefore give meaning to an infinite sum of constants, using this to give meaning to an infinite sum of functions. When the functions being added are the simple powers (x − x_0)^k, the sum is called a Taylor (power) series, and if x_0 = 0 a Maclaurin series. When the functions are trig terms such as sin(kx) or cos(kx), the series might be a Fourier series, a certain infinite sum of trig functions that can be made to represent arbitrary functions, even functions with discontinuities. This type of infinite series is also generalized to sums of other functions such as Legendre polynomials. Eventually, solutions of differential equations will be given in terms of infinite sums of Bessel functions, themselves infinite series.
0-Infinite Series
If {a_k}, k = 0, 1, ⋯ is a sequence of numbers, the ordered sum of all its terms, namely ∑_{k=0}^∞ a_k = a_0 + a_1 + ⋯, is called an infinite series.
(1) Partial sums: s_n = ∑_{k=0}^n a_k.
(2) Sum of the infinite series: the limit of the sequence of its partial sums.
(3) Geometric series (∑_{k=0}^∞ a r^k); the p-series (∑_{k=1}^∞ 1/k^p).
(4) Series with positive terms; series with both positive and negative terms, etc.
Methods to prove convergence:
(1) The nth-term rule: ∑_{n=0}^∞ a_n < ∞ ⇒ lim_{n→∞} a_n = 0.
(2) The ratio test: for ∑_{k=0}^∞ a_k, a series with positive terms, let ρ = lim_{k→∞}(a_{k+1}/a_k). If ρ < 1, the series converges; if ρ > 1, the series diverges; if ρ = 1, the test is inconclusive.
(3) The nth root test: for ∑_{n=0}^∞ a_n, a series with nonnegative terms, let r = lim_{n→∞}(a_n)^{1/n}. If r < 1, the series converges; if r > 1, the series diverges; if r = 1, the test is inconclusive.
(4) An absolutely convergent series is convergent.
(5) Leibniz theorem: for an alternating series ∑_{k=0}^∞ (−1)^k a_k, a_k > 0, if a_k ≥ a_{k+1} and a_k → 0, then the series converges.
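As a quick numerical illustration (a sketch, not part of the original notes), the partial sums of a geometric series with ratio |r| < 1 approach the closed-form sum a/(1 − r), and the ratio-test statistic ρ = a_{k+1}/a_k is exactly r for this series:

```python
# Illustration: partial sums of the geometric series sum_{k>=0} a*r^k
# approach a/(1 - r) for |r| < 1; the ratio-test statistic equals r here.
a, r = 1.0, 0.5

partial = 0.0
for k in range(60):
    partial += a * r**k

limit = a / (1 - r)              # closed-form sum of the geometric series
rho = (a * r**11) / (a * r**10)  # ratio a_{k+1}/a_k, the same for every k

print(partial, limit, rho)
```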
It can be shown that if ρ = lim_{n→∞}(a_{n+1}/a_n) exists, then r = lim_{n→∞}(a_n)^{1/n} also exists, and ρ = r. However, r may exist when ρ does not, so the nth root test is more powerful.
1-Taylor Series
Taylor polynomial approximation:

f(x) = p_n(x) + (1/n!) ∫_a^x (x − t)^n f^{(n+1)}(t) dt    (1)

where the nth-degree Taylor polynomial p_n(x) is given by

p_n(x) = f(a) + (f'(a)/1!)(x − a) + ⋯ + (f^{(n)}(a)/n!)(x − a)^n.

When a = 0, the series is also called a Maclaurin series.
Conditions: 1. f(x), f^{(1)}(x), ⋯, f^{(n+1)}(x) are continuous in a closed interval containing x = a. 2. x is any point in the interval.
A Taylor series represents a function near a given point as an infinite sum of terms that are calculated from the values of the function's derivatives. The Taylor series of a function f(x) about a point a is the power series, when it is meaningful,

f(x) = ∑_{k=0}^∞ (f^{(k)}(a)/k!)(x − a)^k.
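A short numerical sketch (illustrative, not from the notes): the degree-n Maclaurin polynomial of exp(x), p_n(x) = ∑_{k=0}^n x^k/k!, converges to exp(x), and the remainder shrinks rapidly with n:

```python
import math

# Maclaurin polynomial of exp(x): p_n(x) = sum_{k<=n} x^k / k!.
# The approximation error at x = 1 decreases as the degree n grows.
def maclaurin_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
errors = [abs(maclaurin_exp(x, n) - math.exp(x)) for n in (2, 5, 10)]
print(errors)
```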
2-Fourier Series
A Fourier series decomposes a periodic function into a sum of sines and cosines (trigonometric terms or complex exponentials). For a function f(x), periodic on [−L, L], its Fourier series representation is

f(x) = 0.5 a_0 + ∑_{n=1}^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ]    (2)

and the Fourier coefficients a_n and b_n are given by

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx,    b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.    (3)

If f(x) is an odd function then a_n = 0, and if f(x) is an even function then b_n = 0.
Condition: f(x) is piecewise continuous on the closed interval [−L, L]. A function is said to be piecewise continuous on the closed interval [a, b] provided that it is continuous there, with at most a finite number of exceptions where, at worst, we would find a removable or jump discontinuity. At both a removable and a jump discontinuity, the one-sided limits f(t+) = lim_{x→t+} f(x) and f(t−) = lim_{x→t−} f(x) exist and are finite.
A sum of continuous, periodic functions can converge pointwise to a possibly discontinuous function. This was a startling realization for mathematicians of the early nineteenth century.
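As a sketch of the coefficient formulas (3) (an illustration, not part of the notes), take the square wave f(x) = sign(x) on [−π, π], so L = π. Since f is odd, a_n = 0, and the standard result is b_n = 4/(nπ) for odd n and 0 for even n; a simple midpoint quadrature reproduces this:

```python
import math

# Fourier sine coefficients of f(x) = sign(x) on [-pi, pi] (L = pi),
# computed by midpoint quadrature. Expected: b_n = 4/(n*pi) for odd n, 0 for even n.
L = math.pi
N = 20000                       # even, so no quadrature cell straddles x = 0
dx = 2 * L / N
xs = [-L + (i + 0.5) * dx for i in range(N)]
f = [1.0 if x > 0 else -1.0 for x in xs]

def b(n):
    return sum(fi * math.sin(n * math.pi * x / L) for fi, x in zip(f, xs)) * dx / L

print(b(1), b(2), b(3))
```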
f(x) can also be expressed as a complex Fourier series

f(x) = ∑_{n=−∞}^{+∞} c_n e^{inπx/L}    (4)

with c_n = 0.5(a_n − i b_n) for n > 0, c_n = 0.5(a_{−n} + i b_{−n}) for n < 0, and c_0 = 0.5 a_0.
Example 1: The Fourier Transform
The Fourier transform of a function f(x) is defined as

F(m) = ∫_{−∞}^{+∞} f(x) e^{−imx} dx    (11)

and its inverse formula is

f(x) = (1/2π) ∫_{−∞}^{+∞} F(m) e^{imx} dm.    (12)

In this example, x_1 = m_1 = −∞, x_2 = m_2 = +∞ and K(m, x) = e^{−imx}.
Note: The Fourier transform is an extension of the Fourier series when the period of the represented function is increased and approaches infinity. It works for any non-periodic function.
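A numerical sketch of (11) (illustrative; the truncated domain and step size are arbitrary choices): for the Gaussian f(x) = exp(−x²/2), the transform is known in closed form, F(m) = √(2π) exp(−m²/2), so a simple trapezoidal quadrature can be checked against it:

```python
import cmath
import math

# Trapezoidal approximation of F(m) = integral f(x) exp(-i m x) dx
# for the Gaussian f(x) = exp(-x^2/2), truncated to [-a, a].
def fourier(m, a=10.0, n=4000):
    dx = 2 * a / n
    s = 0.0 + 0.0j
    for i in range(n + 1):
        x = -a + i * dx
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        s += w * math.exp(-x * x / 2) * cmath.exp(-1j * m * x)
    return s * dx

m = 1.0
exact = math.sqrt(2 * math.pi) * math.exp(-m * m / 2)
print(fourier(m), exact)
```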
Example 2: The Laplace Transform
The Laplace transform is an example of an integral transform that will convert a differential equation into an algebraic equation. There are four operational rules that relate the transform of derivatives and integrals with multiplication and division.

F(s) = L[f(t)] = ∫_0^∞ f(t) e^{−st} dt

Conditions: if f(t) is piecewise continuous on [0, ∞) and of exponential order (|f(t)| ≤ K e^{αt} for some K and α > 0), then F(s) exists for s > α.
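A numerical sketch of the definition (illustrative; the truncation T and step are arbitrary): f(t) = e^{at} is of exponential order, and its transform 1/(s − a) exists only for s > α = a. A trapezoidal rule on a truncated interval matches the closed form:

```python
import math

# Trapezoidal approximation of L[f](s) = integral_0^inf f(t) exp(-s t) dt
# on the truncated interval [0, T]; valid when the integrand decays (s > a).
def laplace(f, s, T=60.0, n=60000):
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

a, s = 1.0, 2.5
approx = laplace(lambda t: math.exp(a * t), s)
exact = 1.0 / (s - a)       # L[exp(a t)](s) = 1/(s - a) for s > a
print(approx, exact)
```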
4-Galerkin Expansion
For the function

∂_t ω = Lω + N(ω),    LΦ_j = λ_j Φ_j,

assume that

ω = ∑_j a_j(t) Φ_j(x);

the system can be converted to

da_j/dt = λ_j a_j + F_j(a_l).
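The abstract recipe above can be sketched on the 1-D heat equation ∂_t ω = ∂_xx ω on [0, π] with ω(0) = ω(π) = 0 (an illustration of my choosing, not an example from the notes). Here Φ_j = sin(jx), λ_j = −j², and N ≡ 0, so the Galerkin system decouples into da_j/dt = −j² a_j with exact solution a_j(t) = a_j(0) e^{−j² t}:

```python
import math

# Galerkin projection of the heat equation onto sin(j x) modes gives the
# decoupled ODEs da_j/dt = -j^2 a_j; integrate them with forward Euler
# and compare against the exact exponential decay.
def step(a, dt):
    # one forward-Euler step of the decoupled Galerkin ODEs (j = 1, 2, 3)
    return [aj - dt * (j + 1) ** 2 * aj for j, aj in enumerate(a)]

a = [1.0, 0.5, 0.25]          # a_1(0), a_2(0), a_3(0)
dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    a = step(a, dt)

exact = [a0 * math.exp(-(j + 1) ** 2 * T) for j, a0 in enumerate([1.0, 0.5, 0.25])]
print(a, exact)
```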
Application in rotating Couette flow: Double periodic boundary
conditions
The particular flow we select is a version of the famous rotating Couette flow between
two co-axial cylinders. The gap between the cylinders is assumed to be much smaller
than the cylinder radius. A local Cartesian coordinate system x = (x, y, z)^T is oriented such that the axis of rotation is parallel to the z axis, while the circumferential direction corresponds to the x axis. Only flows independent of x are considered. The flow velocity
[Figure 1: The rotating Couette flow]
is represented as (y + u, v, w)^T, so that u ≡ (u, v, w)^T is the velocity perturbation and ū = (y, 0, 0)^T is the equilibrium flow. Under these assumptions, the governing equations are

∂u/∂t + u·∇^T u = −∇p + (1/Re)∇²u + Au,    ∇·u = 0,    (13)

where ∇ = (∂/∂x, ∂/∂y, ∂/∂z), and

A = [  0   Ω − 1   0 ]
    [ −Ω     0     0 ]
    [  0     0     0 ].

For simplicity, the flow is assumed to be 2π-periodic in y and z; u and v are assumed to be odd in y and even in z, while w is assumed odd in z and even in y:

u(y, z) = u(y + 2π, z) = u(y, z + 2π),    p(y, z) = p(y + 2π, z) = p(y, z + 2π),
u(y, z) = −u(−y, z) = u(y, −z),    v(y, z) = −v(−y, z) = v(y, −z),
w(y, z) = w(−y, z) = −w(y, −z).    (14)
Preliminary analysis of the stability properties of the flow
Our interest lies in the global stability of this flow and its asymptotic convergence.
We first apply the well-known energy stability approach. Setting the Lyapunov functional as the perturbation energy E = ‖u‖²/2 leads to a linear eigenvalue problem. For (13)-(14), the resulting eigenfunctions e_{n,m}(x) can easily be found:

e_{n,m}(x) = ( cos(mz) sin(ny) / (√2 π),
               m cos(mz) sin(ny) / (√2 π √(m² + n²)),
               −n sin(mz) cos(ny) / (√2 π √(m² + n²)) )^T,    (15)

where n = 1, 2, ⋯, m = 0, ±1, ±2, ⋯. The corresponding eigenvalues are

λ_{n,m}(Re) = −m / (2√(m² + n²)) − (m² + n²)/Re.    (16)

Note that for this flow, conveniently, neither the eigenfunctions nor the eigenvalues depend on Ω. The eigenvalue λ_{1,−1} is positive for Re > 4√2, with all other eigenvalues less than λ_{1,−1}. Hence, the energy stability limit is Re = Re_E = 4√2. One can show that the flow becomes linearly unstable for 0 < Ω < 1 and

Re > Re_L = 2√2 / (√(1 − Ω) √Ω).    (17)

Note that Re_L = Re_E for Ω = 1/2. Moreover, for Ω = 0 and Ω = 1, it can be proven that this flow is globally stable for any Re. For other values of Ω, we solved (13)-(14) numerically for a variety of initial conditions and observed convergence to the base flow for all Re < Re_L. This, of course, does not eliminate all possible initial conditions, and it does not eliminate the existence of unstable solutions not tending to the base flow with time. Hence, rigorously establishing global stability in the range Re_E < Re < Re_L is of interest.
Next, with the aid of SOS optimization, we analyze the global stability of the periodic
rotating Couette flow (13-14). To this end, we first reduce (13)-(14) to an uncertain
dynamical system.
Finite-dimensional uncertain system
We represent the perturbation velocity as

u(x, t) = ∑_{i=1}^k a_i(t) u_i(x) + u_s(x, t),    (18)

where the finite Galerkin basis fields u_i, i = 1, ⋯, k are an orthonormal set of solenoidal vector fields with an appropriate inner product, the residual perturbation velocity u_s is solenoidal and orthogonal to all the u_i, and both u_i and u_s satisfy the boundary conditions (14) of the Couette flow. We substitute (18) into the flow equation (13), and take the inner product with each of the Galerkin basis fields u_i for i = 1, ⋯, k. After some straightforward manipulation this yields

da/dt = f(a) + Θ_a(u_s) + Θ_b(u_s, a) + Θ_c(u_s)    (19)
where a = (a_1, ⋯, a_k)^T, and the components of f, Θ_a, Θ_b, and Θ_c are

f_i(a) = L_{ij} a_j + N_{ijk} a_j a_k,    (20)
Θ_{a,i}(u_s) = ⟨u_s, g_i⟩,    (21)
Θ_{b,i}(u_s, a) = ⟨u_s, h_{ij}⟩ a_j,    (22)
Θ_{c,i}(u_s) = ⟨u_s, u_s·∇u_i⟩.    (23)

Einstein summation notation (summation over repeated indices) is used in the above equations. The inner product ⟨w_1, w_2⟩ is the integral of w_1·w_2 over the flow domain V = {(y, z) | 0 ≤ y ≤ 2π, 0 ≤ z ≤ 2π}. The second-order tensor L and the third-order tensor N are defined component-wise as

L_{ij} = (1/Re)⟨u_i, ∇²u_j⟩ + ⟨u_i, Au_j⟩,    (24)
N_{ijk} = −⟨u_i, u_j·∇u_k⟩.    (25)

The vector fields g_i and h_{ij} are defined as

g_i = (1/Re)∇²u_i + ū·∇^T u_i − u_i·∇^T ū,    (26)
h_{ij} = u_j·∇u_i − u_i·∇^T u_j,    (27)

where ū is the steady flow whose stability is studied. For the periodic Couette flow, ū = (y, 0, 0)^T. The notation used can be clarified by the Einstein equivalent of (26): g_i^m = (1/Re)∇²u_i^m + ū_k ∂u_i^m/∂x_k − (∂ū_k/∂x_m) u_i^k, where g_i^m, u_i^m, x_m are the m-th components of the vectors g_i, u_i and x, respectively.
Equation (19) represents the evolution of the parameters a of the Galerkin expansion, in which the residual u_s appears but is unknown. Instead of considering in full the dynamics of the remaining unmodelled modes u_s, which are themselves described by a system of partial differential equations, we will find bounds on the effect of u_s on a. The evolution of q² = ‖u_s‖²/2 satisfies the differential equation

d(q²)/dt = a·f(a) − a·ȧ + Γ(u_s) + χ(u_s, a)    (28)
         = −a·Θ(u_s, a) + Γ(u_s) + χ(u_s, a),

where

Θ(u_s, a) = Θ_a(u_s) + Θ_b(u_s, a) + Θ_c(u_s),    (29)
Γ(u_s) = (1/Re)⟨u_s, ∇²u_s⟩ − ⟨u_s, u_s·∇ū + ū·∇u_s⟩,    (30)
χ(u_s, a) = 2⟨u_s, d_j⟩ a_j,    d_j = (1/Re)∇²u_j − (u_j·∇ū + ū·∇u_j).    (31)

Note that the terms a·f(a) and Γ(u_s) in (28) represent the self-contained dissipation or generation of energy depending on a_i u_i and u_s, while the term χ(u_s, a) denotes the generation or dissipation of energy arising from the interaction of these velocity fields.
Overall, the periodic rotating Couette flow under consideration can be described by
the system

ȧ = f(a) + Θ(u_s, a),    (32)
d(q²)/dt = −a·Θ(u_s, a) + Γ(u_s) + χ(u_s, a).    (33)

In (32) and (33), an important fact is that the evolution of the dynamical system depends on u_s via the perturbation terms Θ(u_s, a), Γ(u_s), and χ(u_s, a). This means that for a given q² > 0, there exist many u_s satisfying ‖u_s‖²/2 = q², producing different right-hand sides of (32) and (33). In this sense, (32)-(33) is an uncertain dynamical system. The solution of this system is therefore not unique. However, if all the solutions of (32)-(33) tend to zero as time tends to infinity, then the solution of the Navier-Stokes system also tends to zero.
Application in rotating Couette flow: periodic boundary condition in z and no-slip boundary condition in y
The flow is assumed to evolve inside the domain V := {(y, z) | −π ≤ y ≤ π}, satisfying no-slip boundary conditions along ∂V, namely u(±π, z) = 0. Further, the flow is 2π-periodic in z to achieve maximum simplification.
First consider the energy stability of the flow. The linear stability of the flow can be analyzed similarly.
Energy stability of the flow
Setting the Lyapunov functional as the perturbation energy function E = ‖u‖²/2 leads to a linear eigenvalue problem, i.e.,

−λu = [ −(1/Re)∇²      1/2           0      ]
      [     1/2      −(1/Re)∇²       0      ] u + ( 0, ∂ζ/∂y, ∂ζ/∂z )^T    (34)
      [      0           0       −(1/Re)∇²  ]

∇·u = 0,    u(±π, z) = 0,    (35)

where ζ is the Lagrange multiplier for the incompressibility condition, and λ is the Lagrange multiplier for the unit-norm condition ‖u‖ = 1.
Considering the 2π-periodicity of the flow in z, without loss of generality we assume that the energy eigenfunctions u and the Lagrange multiplier ζ take the following form:

u = ∑_{m=−∞}^∞ û_m(y) cos(mz),
v = ∑_{m=−∞}^∞ v̂_m(y) cos(mz),    (36)
w = ∑_{m=−∞}^∞ ŵ_m(y) sin(mz),
ζ = ∑_{m=−∞}^∞ ζ̂_m(y) cos(mz).
Substituting (36) into (34) and (35) gives:
(i) m ≠ 0.

(D² − m² − Reλ)²(D² − m²) v̂_m = −(m²Re²/4) v̂_m,    (37)
v̂_m = D v̂_m = (D² − m² − Reλ)(D² − m²) v̂_m = 0,    y = ±π,    (38)

and

ŵ_m = −(1/m) D v̂_m,    û_m = −(2/(m²Re)) (D² − m²)(D² − m² − Reλ) v̂_m,

where D is the differential operator d/dy.
(ii) m = 0.

(D² − Reλ) û_0 = 0,    û_0(±π) = 0,    v̂_0 = ŵ_0 = 0.    (39)

For the latter case, it is easy to derive that

û_0 = e_{n,0} = ( (1/(√2 π)) cos((2n − 1)y/2), 0, 0 )^T,    λ = −(2n − 1)² / (4Re),

where n = 1, 2, ⋯ is the mode number. For the former case, solving the energy eigenvalue problem is equivalent to solving the 6th-order ODE (37) subject to the boundary conditions (38), which however is hard to solve analytically.
Notice that the onset of energy instability in Reynolds number, denoted by Re_E, is determined by (37)-(38) with λ = 0, i.e.,

(D² − m²)³ v̂_m = −(m²Re_E²/4) v̂_m,    (40)
v̂_m = D v̂_m = (D² − m²)² v̂_m = 0,    y = ±π.    (41)

In the following, the exact solution of the problem (40)-(41) is exploited. The even and odd solutions of (40)-(41) can be written in the forms

v̂_{m,e} = ∑_{i=1}^3 A_i cosh(q_i y)    (42)

and

v̂_{m,o} = ∑_{i=1}^3 B_i sinh(q_i y),    (43)

respectively, where the q_i are the roots of the equation

(q² − m²)³ = −m²Re_E²/4.    (44)

The higher modes can of course be obtained from these solutions, but here our interest lies only in the first even and odd modes of system instability. If, in place of Re_E, we
introduce the quantity r that satisfies the relationship

m²Re_E²/4 = m⁶r³,    (45)

then the roots of (44) can be written down explicitly in the form

q_1 = i m (r − 1)^{1/2},    q_2 = m(A − iB),    q_3 = m(A + iB),    (46)

where the quantities A and B satisfy

2A² = (1 + r + r²)^{1/2} + (1 + r/2),
2B² = (1 + r + r²)^{1/2} − (1 + r/2).

Taking these relations into account, and setting A_1 = 1, A_2 = (C_1 + iC_2)/2, A_3 = (C_1 − iC_2)/2, B_1 = −i, B_2 = (S_1 + iS_2)/2, B_3 = (S_1 − iS_2)/2 with undetermined real constants C_i, S_i, i = 1, 2, the solutions (42) and (43) can be written more explicitly as

v̂_{m,e} = cos(m(r − 1)^{1/2} y) + C_1 cosh(mAy) cos(mBy) + C_2 sinh(mAy) sin(mBy)    (47)

and

v̂_{m,o} = sin(m(r − 1)^{1/2} y) + S_1 sinh(mAy) cos(mBy) + S_2 cosh(mAy) sin(mBy).    (48)
Since the boundary conditions (41) are homogeneous, Re_E can be obtained by solving a characteristic value problem, regarding it as a function of the mode number m in the z-direction. More precisely, applying the boundary conditions (41) to (47) or (48) gives three linear homogeneous equations for the constants A_i or B_i. If those constants are not to vanish identically, then the determinant of the system must vanish, yielding

−(r − 1)^{1/2} tan(mπ(r − 1)^{1/2}) = [ (A + √3 B) sinh(2πmA) + (√3 A − B) sin(2πmB) ] / [ cosh(2πmA) + cos(2πmB) ]    (49)

for the even mode, and

(r − 1)^{1/2} cot(mπ(r − 1)^{1/2}) = [ (A + √3 B) sinh(2πmA) − (√3 A − B) sin(2πmB) ] / [ cosh(2πmA) − cos(2πmB) ]    (50)

for the odd mode. (49) and (50) are transcendental equations relating m and r, and thus can only be solved numerically. We then find that the onset of energy instability of the flow is determined by the first even mode at m = 1, which corresponds to r = 1.3441. Considering the relationship (45), the energy stability limit is Re = Re_E, where

Re_E := 2m²r^{3/2} = 2 × 1.3441^{3/2} = 3.1166.
Note that, for this flow, conveniently, the energy stability limit is independent of Ω.
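The numerical solution of the even-mode relation (49) can be sketched directly (an illustrative check, assuming A and B are given by the relations following (45), 2A² = √(1 + r + r²) + (1 + r/2) and 2B² = √(1 + r + r²) − (1 + r/2)): bisecting on r at m = 1 should recover r ≈ 1.3441 and Re_E = 2r^{3/2} ≈ 3.1166:

```python
import math

# g(r) = LHS - RHS of the even-mode dispersion relation (49) at mode m.
def g(r, m=1):
    s = math.sqrt(1 + r + r * r)
    A = math.sqrt((s + 1 + r / 2) / 2)
    B = math.sqrt((s - 1 - r / 2) / 2)
    q = math.sqrt(r - 1)
    lhs = -q * math.tan(m * math.pi * q)
    rhs = ((A + math.sqrt(3) * B) * math.sinh(2 * math.pi * m * A)
           + (math.sqrt(3) * A - B) * math.sin(2 * math.pi * m * B)) \
        / (math.cosh(2 * math.pi * m * A) + math.cos(2 * math.pi * m * B))
    return lhs - rhs

lo, hi = 1.3, 1.4          # g changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

r = 0.5 * (lo + hi)
Re_E = 2 * r ** 1.5        # m = 1, so Re_E = 2 m^2 r^(3/2)
print(r, Re_E)
```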
Linear stability of the flow
The linear stability of the flow (13) is the stability of the linearized system of (13), i.e.,

∂u/∂t = −∇p + (1/Re)∇²u + Au,    ∇·u = 0,    (51)

which can be analyzed by solving the following linear eigenvalue problem:

−λu = [ −(1/Re)∇²     1 − Ω          0      ]
      [      Ω      −(1/Re)∇²        0      ] u + ( 0, ∂ζ/∂y, ∂ζ/∂z )^T    (52)
      [      0           0       −(1/Re)∇²  ]

∇·u = 0,    u(±π, z) = 0,    (53)

where ζ and λ are given as in (34). As in the preceding part, the problem is equivalent to solving

(D² − m² − Reλ)²(D² − m²) v̂_m = −m²Re²(1 − Ω)Ω v̂_m,    (54)
v̂_m = D v̂_m = (D² − m² − Reλ)(D² − m²) v̂_m = 0,    y = ±π.    (55)

Immediately, one can show that the Couette flow becomes linearly unstable for 0 < Ω < 1 and

Re > Re_L := Re_E / (2√(1 − Ω)√Ω) = 3.1166 / (2√(1 − Ω)√Ω).    (56)

Note that Re_L = Re_E for Ω = 1/2.
Vector Spaces, Vector Fields &
Operators
In the context of physics we are often interested in a quantity or property which varies in a smooth and continuous way over some one-, two-, or three-dimensional region of space. This constitutes either a scalar field or a vector field, depending on the nature of the property. In this chapter we consider the relationship between a scalar field involving a variable potential and a vector field involving 'field', where this means force per unit mass or charge. The properties of scalar and vector fields are described, along with how they lead to important concepts, such as that of a conservative field, and the important and useful Gauss and Stokes theorems (which will actually be presented separately). Finally, examples may be given to demonstrate the ideas of vector analysis.
There are four types of functions involving scalars and vectors:
• Scalar functions of a scalar, f (x)
• Vector functions of a scalar, r(t)
• Scalar functions of a vector, ϕ(r)
• Vector functions of a vector, A(r)
1- The vector x is normalised if x^T x = 1.
2- The vectors x and y are orthogonal if x^T y = 0.
3- The vectors x_1, x_2, ..., x_n are linearly independent if the only numbers which satisfy the equation a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0 are a_1 = a_2 = ... = a_n = 0.
4- The vectors x_1, x_2, ..., x_n form a basis for an n-dimensional vector space if any vector x in the vector space can be written as a linear combination of vectors in the basis, thus x = a_1 x_1 + a_2 x_2 + ... + a_n x_n, where a_1, a_2, ..., a_n are scalars.
0-Scalar (inner) product of vector fields

⟨A, B⟩ = A·B = A^T B = A_1 B_1 + A_2 B_2 + A_3 B_3    (57)

where A = (A_1, A_2, A_3) and B = (B_1, B_2, B_3). Moreover,

A·B = |A||B| cos θ,

where θ is the angle between A and B satisfying 0 ≤ θ ≤ π.
Product laws:
(1) Commutative: A·B = B·A
(2) Associative: mA·nB = mn(A·B)
(3) Distributive: A·(B + C) = A·B + A·C
(4) Cauchy-Schwarz inequality: A·B ≤ (A·A)^{1/2} (B·B)^{1/2}

L_p norms
There are many norms that could be defined for vectors. One type of norm is called an L_p norm, often denoted as ‖·‖_p. For p ≥ 1, it is defined as

‖x‖_p = ( ∑_{i=1}^n |x_i|^p )^{1/p},    x = [x_1, ⋯, x_n]^T.    (58)

For instance,
(1) ‖x‖_1 = ∑_i |x_i|, also called the Manhattan norm because it corresponds to sums of distances along coordinate axes, as one would travel along the rectangular street plan of Manhattan.
(2) ‖x‖_2 = √(∑_i x_i²), also called the Euclidean norm, the Euclidean length, or just the length of the vector.
(3) ‖x‖_∞ = max_i |x_i|, also called the max norm or the Chebyshev norm.
Some relationships between norms:

‖x‖_∞ ≤ ‖x‖_2 ≤ ‖x‖_1,
‖x‖_∞ ≤ ‖x‖_2 ≤ √n ‖x‖_∞,
‖x‖_2 ≤ ‖x‖_1 ≤ √n ‖x‖_2.

Define the inner-product-induced norm ‖x‖ = √⟨x, x⟩. Then

(‖x‖ + ‖y‖)² ≥ ‖x + y‖²,    ‖x + y‖² = ‖x‖² + ‖y‖² + 2⟨x, y⟩.
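The norm definitions and the chain of inequalities can be sketched numerically (an illustration with an arbitrary sample vector):

```python
import math

# L1, L2, and Linf norms of a sample vector in R^3, for checking the
# inequalities ||x||_inf <= ||x||_2 <= ||x||_1 and their sqrt(n) converses.
x = [3.0, -4.0, 1.0]
n = len(x)

l1 = sum(abs(v) for v in x)                # Manhattan norm
l2 = math.sqrt(sum(v * v for v in x))      # Euclidean norm
linf = max(abs(v) for v in x)              # Chebyshev norm

print(l1, l2, linf)
```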
1-Vector product of vector fields

A × B = (A_2 B_3 − A_3 B_2, A_3 B_1 − A_1 B_3, A_1 B_2 − A_2 B_1)    (59)

The cross product of the vectors A and B is orthogonal to both A and B, forms a right-handed system with A and B, and has length given by

|A × B| = |A||B| sin θ,

where θ is the angle between A and B satisfying 0 ≤ θ ≤ π.
Additional properties of the cross product:
(1) Scalar multiplication: (aA) × (bB) = ab(A × B)
(2) Distributive laws: A × (B + C) = A × B + A × C
(3) Anticommutation: B × A = −A × B
(4) Nonassociativity: A × (B × C) = (A·C)B − (A·B)C
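The properties above can be spot-checked numerically (a sketch with arbitrary sample vectors), including the nonassociativity identity A × (B × C) = (A·C)B − (A·B)C:

```python
# Component-wise cross and dot products, and a check of the identity
# A x (B x C) = (A.C)B - (A.B)C for sample vectors.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A, B, C = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0), (0.0, 1.0, -1.0)

lhs = cross(A, cross(B, C))
rhs = tuple(dot(A, C) * b - dot(A, B) * c for b, c in zip(B, C))
anti = tuple(-v for v in cross(B, A))   # anticommutation: A x B = -(B x A)

print(lhs, rhs)
```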
2-Gradient of a scalar field

grad ϕ = ∇ϕ = ( ∂ϕ/∂x, ∂ϕ/∂y, ∂ϕ/∂z )    (60)

If we consider a surface in 3D space with ϕ(r) = const, then the direction normal (i.e. perpendicular) to the surface at the point r is the direction of grad ϕ. The magnitude of
the greatest rate of change of ϕ(r) is the magnitude of grad ϕ.
In physical situations we may have a potential, ϕ, which varies over a particular region and thus constitutes a field E, satisfying

E = −∇ϕ = −( ∂ϕ/∂x, ∂ϕ/∂y, ∂ϕ/∂z ).
Example: As an example we calculate the electric field at point (x, y, z) due to a charge q_1 at (2, 0, 0) and a charge q_2 at (−2, 0, 0), where charges are in coulombs and distances in metres. The potential at the point (x, y, z) is

ϕ(x, y, z) = q_1 / (4πε_0 {(2 − x)² + y² + z²}^{1/2}) + q_2 / (4πε_0 {(2 + x)² + y² + z²}^{1/2}).

As a result, the components of the field are

E_x = −q_1(2 − x) / (4πε_0 {(2 − x)² + y² + z²}^{3/2}) + q_2(2 + x) / (4πε_0 {(2 + x)² + y² + z²}^{3/2}),
E_y = q_1 y / (4πε_0 {(2 − x)² + y² + z²}^{3/2}) + q_2 y / (4πε_0 {(2 + x)² + y² + z²}^{3/2}),
E_z = q_1 z / (4πε_0 {(2 − x)² + y² + z²}^{3/2}) + q_2 z / (4πε_0 {(2 + x)² + y² + z²}^{3/2}).
3-Divergence of a vector field

div A = ∇·A = ∂A_1/∂x + ∂A_2/∂y + ∂A_3/∂z    (61)

The value of the scalar div A at point r gives the rate at which the material is expanding or flowing away from the point r (outward flux per unit volume).
Theorem involving divergence
The divergence theorem relates a volume integral to a surface integral within a vector field. It states that

∯_Γ F·dA = ∭_Ω ∇·F dV,    (62)

where Ω represents the overall volume domain and Γ denotes the total surface boundary.
4-Curl of a vector field

curl A ≡ ∇ × A = ( ∂A_3/∂y − ∂A_2/∂z, ∂A_1/∂z − ∂A_3/∂x, ∂A_2/∂x − ∂A_1/∂y )    (63)

where A = (A_1, A_2, A_3). The vector curl A at point r gives the local rotation (or vorticity) of the material at point r. The direction of curl A is the axis of rotation, and half the magnitude of curl A is the rate of rotation or angular frequency of the rotation.
Theorem involving curl of vectors
Stokes's theorem: we consider a surface S that has a closed non-intersecting boundary C, with the topology of, say, one half of a tennis ball. Stokes's theorem states that for a
vector field F within which the surface is situated,

∮_C F·dr = ∬_S (∇ × F)·dA.    (64)
6-Repeated operations
Note that grad must operate on a scalar field and gives a vector field in return, divoperates on a vector field and gives a scalar field in return, and curl operates on a vector
field and gives a vector field in return.
div grad ϕ = ∇²ϕ = ∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z² (65)
curl grad ϕ = 0 (66)
div curl A = 0 (67)
curl curl A = grad div A − ∇2 A (68)
∇²A = ∂²A/∂x² + ∂²A/∂y² + ∂²A/∂z² (69)
where ∇² = ∇ · ∇ is the very important Laplacian operator. Other forms of ∇² in other coordinate systems are as follows.

(1) Spherical polar coordinates:

∇² = (1/r²) ∂/∂r ( r² ∂/∂r ) + (1/(r² sin θ)) ∂/∂θ ( sin θ ∂/∂θ ) + (1/(r² sin²θ)) ∂²/∂φ²

(2) Two-dimensional polar coordinates:

∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ²

(3) Cylindrical coordinates:

∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ² + ∂²/∂z²
7-Product rules

grad(ϕψ) = ϕ grad ψ + ψ grad ϕ (70)

div(ϕA) = ϕ div A + A · grad ϕ (71)

curl(ϕA) = ϕ curl A + (grad ϕ) × A (72)

div(A × B) = B · curl A − A · curl B (73)
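Identities (66) and (67) can be spot-checked numerically by building grad, div and curl from central differences. The sketch below is not from the notes; the test fields phi and A are arbitrary choices.

```python
# Numerical spot-check of identities (66) and (67):
# curl grad phi = 0 and div curl A = 0, using central differences.

H = 1e-3

def d(f, p, k):
    """Central-difference partial derivative of f at p w.r.t. coordinate k."""
    q1, q2 = list(p), list(p)
    q1[k] += H
    q2[k] -= H
    return (f(q1) - f(q2)) / (2 * H)

def grad(phi):
    return lambda p: [d(phi, p, k) for k in range(3)]

def div(A):
    return lambda p: sum(d(lambda q, i=i: A(q)[i], p, i) for i in range(3))

def curl(A):
    def c(p):
        dA = [[d(lambda q, i=i: A(q)[i], p, k) for k in range(3)]
              for i in range(3)]
        return [dA[2][1] - dA[1][2],   # dA3/dy - dA2/dz
                dA[0][2] - dA[2][0],   # dA1/dz - dA3/dx
                dA[1][0] - dA[0][1]]   # dA2/dx - dA1/dy
    return c

phi = lambda p: p[0]**2 * p[1] + p[2]            # scalar field
A = lambda p: [p[1]*p[2], p[0]**2, p[0]*p[2]]    # vector field

pt = [0.3, -0.7, 1.1]
assert all(abs(v) < 1e-6 for v in curl(grad(phi))(pt))   # (66)
assert abs(div(curl(A))(pt)) < 1e-6                      # (67)
```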
Linear Algebra, Matrices,
Eigenvectors
In many practical systems there naturally arises a set of quantities that can conveniently
be represented as a certain dimensional array, referred to as a matrix. If matrices were
simply a way of representing arrays of numbers then they would have only a marginal
utility as a means of visualizing data. However, a whole branch of mathematics has
evolved, involving the manipulation of matrices, which has become a powerful tool for
the solution of many problems.
For instance, consider the set of n linear equations with n unknowns
a11Y 1 + a12Y 2 + ... + a1nY n = 0a21Y 1 + a22Y 2 + ... + a2nY n = 0
..........................an1Y 1 + an2Y 2 + ... + annY n = 0
(74)
The necessary and sufficient condition for the set to have a non-trivial solution (other
than Y₁ = Y₂ = ... = Yₙ = 0) is that the determinant of the array of coefficients is zero: det(A) = 0.
0-Basic definitions and notation
(1) Transpose: AT = (a ji). (A + B)T = AT + BT . A symmetric matrix is equal to
its transpose, A = AT
(2) Diagonal matrices: diag(d₁, d₂, · · · , dₙ) is the n × n matrix with d₁, d₂, · · · , dₙ on the main diagonal and zeros elsewhere:

diag(d₁, d₂, · · · , dₙ) =
| d₁  0  · · ·  0  |
| 0   d₂ · · ·  0  |
|        . . .     |
| 0   0  · · ·  dₙ |

The direct sum of square blocks is written A₁ ⊕ · · · ⊕ Aₖ = diag(A₁, · · · , Aₖ).
(3) Trace: tr(A) = Σᵢ aᵢᵢ.

tr(A) = tr(Aᵀ), tr(cA) = c tr(A), tr(A + B) = tr(A) + tr(B).
(4) Determinant: |A| = Σ_{j=1..n} a_ij a_(ij) for any fixed row i, where a_(ij) = (−1)^(i+j) |A_(i)(j)|, with A_(i)(j) denoting the submatrix that is formed from A by removing the ith row and the jth column.
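The cofactor definition in (4) translates directly into a deliberately naive recursive routine (exponential cost, so only for small matrices); the example matrix below is an arbitrary choice, not from the notes.

```python
# Cofactor expansion of the determinant along the first row:
# det(A) = sum_j A[0][j] * (-1)**j * det(minor(A, 0, j)).
# Illustrative only: exponential cost, not a practical algorithm.

def minor(A, i, j):
    """Submatrix of A with row i and column j removed."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1)**j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
assert det(A) == 8            # 2*(6-1) - 1*(2-0) + 0 = 8
# |A| = |A^T|:
At = [list(c) for c in zip(*A)]
assert det(At) == det(A)
```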
|A| = |Aᵀ| , |cA| = cⁿ|A| .

(5) Adjugate: adj(A) = (a_(ji)) = (a_(ij))ᵀ.

A adj(A) = adj(A) A = |A| I.
1-Multiplication of matrices and multiplication of vectors and
matrices
(1) Matrix multiplication

associative: A(BC) = (AB)C,

distributive over addition: A(B + C) = AB + AC, (B + C)A = BA + CA.

For any positive integer k,

I − Aᵏ = (I − A)(I + A + · · · + Aᵏ⁻¹).

For an odd positive integer k,

I + Aᵏ = (I + A)(I − A + A² − · · · + Aᵏ⁻¹).
(2) Traces and determinants of square Cayley products
tr(AB) = tr(BA), tr(ABC ) = tr(BC A) = tr(CAB),
xT Ax = tr(xT Ax) = tr(AxxT ).
|AB| = |A||B|,
| A   0 |
| −I  B | = |A||B| .

(3) The Kronecker product
A ⊗ B =
| a₁₁B · · · a₁ₘB |
|  ...  · · ·  ... |
| aₙ₁B · · · aₙₘB |

|A ⊗ B| = |A|ᵐ|B|ⁿ , A ∈ R^{n×n}, B ∈ R^{m×m}
(aA) ⊗ (bB) = ab(A ⊗ B) = (abA) ⊗ B = A ⊗ (abB)
(A + B) ⊗ C = A ⊗ C + B ⊗ C, (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C ),
(A ⊗ B)T = AT ⊗ BT , (A ⊗ B)(C ⊗ D) = AC ⊗ BD.
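The mixed-product property (A ⊗ B)(C ⊗ D) = AC ⊗ BD can be verified for small matrices with plain lists of lists; the matrices below are arbitrary choices, not from the notes.

```python
# Spot-check of the Kronecker mixed-product property
# (A kron B)(C kron D) = (AC) kron (BD) for small integer matrices.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def kron(X, Y):
    """Kronecker product: block (i, j) of the result is X[i][j] * Y."""
    return [[X[i][j] * Y[p][q]
             for j in range(len(X[0])) for q in range(len(Y[0]))]
            for i in range(len(X)) for p in range(len(Y))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 1]]
D = [[1, 1], [0, 2]]

lhs = matmul(kron(A, B), kron(C, D))
rhs = kron(matmul(A, C), matmul(B, D))
assert lhs == rhs
```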
2-Matrix rank and the inverse of a full rank matrix
The linear dependence or independence of the vectors forming the rows or columns of
a matrix is an important characteristic of the matrix. The maximum number of linearly
independent vectors is called the rank of the matrix, rank(A).
rank(aA) = rank(A) for a ≠ 0 , rank(A) ≤ min(n, m) , A ∈ R^{n×m}.
If the rank of a matrix is the same as its smaller dimension, we say the matrix is of full
rank . In the case of a nonsquare matrix, we may say the matrix is of full row rank or full
column rank just to emphasize which is the smaller number.
rank(AB) ≤ min(rank(A), rank(B)),
rank(A + B) ≤ rank(A) + rank(B),
|rank(A) − rank(B)| ≤ rank(A + B).
Linear systems
A linear system An×mx = b, for which a solution exists, is said to be consistent;
otherwise, it is inconsistent. The system is consistent if and only if
rank([A|b]) = rank(A), (75)
namely, the space spanned by the columns of A is the same as that spanned by the columns of A and the vector b; therefore, b must be a linear combination of the columns of A. A special case that yields (75) for any b is
rank(An×m) = n,
and so if A is of full row rank, the system is consistent regardless of the value of b. In this case, of course, the number of rows of A must be no greater than the number of columns.
A square system in which A is nonsingular is clearly consistent, and the solution is
x = A−1b.
Preservation of positive definiteness
(1) If C is positive definite and A is of full column rank, then AᵀCA is positive definite.

(2) If AᵀCA is positive definite, then A is of full column rank.
A lower bound on the rank of a matrix product
If A is n × n and B is a matrix with n rows, then
rank(AB) ≥ rank(A) + rank(B) − n.
Inverse of products and sums of matrices
The inverse of the Cayley product of two nonsingular matrices of the same size is
particularly easy to form. If A and B are square full rank matrices of the same size, then
(AB)−1 = B−1A−1.
A(I + A)−1 = (I + A−1)−1,
(A + BBT )−1B = A−1B(I + BT A−1B)−1,
(A−1 + B−1)−1 = A(A + B)−1B,
A − A(A + B)−1A = B − B(A + B)−1B,
A−1 + B−1 = A−1(A + B)B−1,
(I + AB)−1 = I − A(I + BA)−1B, (I + AB)−1A = A(I + BA)−1,
(A ⊗ B)−1 = A−1 ⊗ B−1.
3-Eigensystems
Definitions
1- The eigenvalues of a symmetric matrix are the numbers λ that satisfy |A − λI| = 0.
2- The eigenvectors of a symmetric matrix are the vectors x that satisfy (A − λI)x = 0.
Theorems
1-The eigenvalues of any real symmetric matrix are real.
2-The eigenvectors of any real symmetric matrix corresponding to different eigenvalues
are orthogonal.
3-Diagonalisation of symmetric matrices
Definitions
1- An orthogonal matrix U is a real square matrix such that UᵀU = UUᵀ = I.
2- If U is a real orthogonal matrix of order n × n and A is a real matrix of the same order, then UᵀAU is called the orthogonal transform of A.
Theorem
1- If A is a real symmetric matrix of order n × n, then it is possible to find an orthogonal matrix U of the same order such that the orthogonal transform of A with respect to U is diagonal, and the diagonal elements of the transform are the eigenvalues of A.
2- (UᵀAU)ᵐ = UᵀAᵐU.

- Cayley-Hamilton Theorem: A real square symmetric matrix satisfies its own characteristic equation (i.e. its own eigenvalue equation):

Aⁿ + aₙ₋₁Aⁿ⁻¹ + aₙ₋₂Aⁿ⁻² + ... + a₁A + a₀I = 0

where

a₀ = (−1)ⁿ|A| , aₙ₋₁ = −tr(A) .

- Trace Theorem: The sum of the eigenvalues of a matrix A is equal to the sum of the diagonal elements of A, which is defined as tr(A).
- Determinant Theorem: The product of the eigenvalues of A is equal to the determinant of A.
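For a 2 × 2 matrix the characteristic equation is λ² − tr(A)λ + |A| = 0, so the Cayley-Hamilton, trace and determinant theorems can all be checked directly. A small sketch (the example matrix is an arbitrary choice, not from the notes):

```python
import math

# Check Cayley-Hamilton, trace and determinant theorems for a 2x2
# symmetric matrix: A^2 - tr(A) A + |A| I = 0, sum of eigenvalues
# = tr(A), product of eigenvalues = |A|.

A = [[2.0, 1.0],
     [1.0, 3.0]]

tr = A[0][0] + A[1][1]                       # = 5
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]      # = 5

# A^2 - tr(A) A + det(A) I, element by element
A2 = [[A[0][0]**2 + A[0][1]*A[1][0], A[0][0]*A[0][1] + A[0][1]*A[1][1]],
      [A[1][0]*A[0][0] + A[1][1]*A[1][0], A[1][0]*A[0][1] + A[1][1]**2]]
CH = [[A2[i][j] - tr*A[i][j] + det*(1.0 if i == j else 0.0)
       for j in range(2)] for i in range(2)]
assert all(abs(CH[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Eigenvalues of a symmetric 2x2 matrix from the quadratic formula
disc = math.sqrt(tr**2 - 4*det)
lam1, lam2 = (tr + disc)/2, (tr - disc)/2
assert abs((lam1 + lam2) - tr) < 1e-12       # trace theorem
assert abs((lam1 * lam2) - det) < 1e-12      # determinant theorem
```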
4-Matrix Factorizations
Matrices can be factored in a variety of ways as a product of matrices with different
properties. These different factorizations, or decompositions, reveal different aspects of
matrix algebra and are useful in different computational arenas.
Similarity transform
Two square matrices A and B are said to be similar if an invertible matrix P can be found for which A = PBP⁻¹.
Similarity to a diagonal matrix
Systems of differential equations sometimes can be uncoupled by diagonalizing a
matrix, obtaining the similarity transformation A = P DP −1, where the n columns of
P are the n eigenvectors of A, and D is a diagonal matrix whose entries are the corresponding eigenvalues of A.
Similarity to a Jordan canonical form
However, the most general form is A = PJP⁻¹, where J is a Jordan matrix rather than a diagonal matrix D. The Jordan matrix is a diagonal matrix with some additional 1's on the superdiagonal, the one above the main diagonal. For some matrices, the Jordan
matrix is as close to diagonalization as can be achieved.
A square matrix is similar to either a diagonal matrix or to a Jordan matrix. In either
event, the eigenvalues of A appear on the diagonal of D or J. A square symmetric matrix is orthogonally similar to a diagonal matrix.
LU decomposition
LU decomposition can be obtained as a by-product of Gaussian elimination. The row
reductions that yield the upper triangular factor U also yield the lower triangular factor L. This decomposition is an efficient way to solve systems of the form AX = Y, where the vector Y could be one of a number of right-hand sides. In fact, the Doolittle, Crout, and Cholesky variations of the decomposition are important algorithms for the numerical
solutions of systems of linear equations.
There are at least five different versions of the LU decomposition:

1. Doolittle, L₁U, with 1's on the main diagonal of L.
2. Crout, LU₁, with 1's on the main diagonal of U.
3. LDU, L₁DU₁, with 1's on the main diagonals of L and U, and D a diagonal matrix.
4. Gauss, L₁DL₁ᵀ, for A symmetric, with 1's on the main diagonal of L and D a diagonal matrix.
5. Cholesky, RRᵀ, for A symmetric positive definite, with R = L₁D^(1/2) and D the diagonal matrix of the L₁DL₁ᵀ form.
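A minimal Doolittle factorization (variant 1 above) can be sketched as follows. No pivoting is performed, so this is an illustration rather than a production solver, and the test system is an arbitrary choice, not from the notes.

```python
# Doolittle LU factorization (1's on the diagonal of L), without
# pivoting, plus forward/back substitution to solve A x = y.
# Illustrative sketch only: real codes pivot for stability.

def lu_doolittle(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve(L, U, y):
    n = len(y)
    z = [0.0] * n                      # forward substitution: L z = y
    for i in range(n):
        z[i] = y[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n                      # back substitution: U x = z
    for i in reversed(range(n)):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0, 0.0],
     [3.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
y = [24.0, 30.0, -24.0]
L, U = lu_doolittle(A)
x = solve(L, U, y)
# Residual check: A x should reproduce the right-hand side.
r = [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
assert all(abs(ri - yi) < 1e-9 for ri, yi in zip(r, y))
```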
QR decomposition
The QR decomposition factors a matrix into a product of an orthogonal matrix Q and an upper triangular matrix R. It is an important ingredient of powerful numerical methods for finding eigenvalues and for solving the least-squares problem.

The factors Q and R of every real matrix are unique once the otherwise arbitrary signs on the diagonal of R are fixed. Modern computational algorithms for finding eigenvalues numerically use some version of the QR algorithm. Starting from A₀ = A, apply the QR decomposition iteratively:

A₀ = Q₀R₀ , A₁ = R₀Q₀ = Q₁R₁ , A₂ = R₁Q₁ = Q₂R₂ , A₃ = R₂Q₂ = Q₃R₃ , . . .
If A is real and no two eigenvalues have equal magnitude, that is,
0 < |λn| < · · · < |λ2| < |λ1|,
then the sequence of matrices Aₖ converges to an upper triangular matrix with the eigenvalues of A₀ on the main diagonal. If, in addition, A is symmetric, then Aₖ converges to a diagonal matrix.
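The iteration can be sketched for a symmetric 2 × 2 matrix, where each QR step reduces to a single Givens-style rotation. The example matrix is an arbitrary choice (eigenvalues (3 ± √5)/2); production eigensolvers add shifts and Hessenberg reduction, so this is only an illustration.

```python
import math

# Bare-bones QR iteration A_{k+1} = R_k Q_k on a symmetric 2x2
# matrix; the QR step zeroes the (2,1) entry with a Givens rotation.

def qr_step(A):
    """One iteration A_k -> R_k Q_k for a 2x2 matrix."""
    a, c = A[0][0], A[1][0]
    r = math.hypot(a, c)
    cs, sn = a / r, c / r
    # R = G A with G = [[cs, sn], [-sn, cs]], so Q = G^T
    R = [[cs*A[0][0] + sn*A[1][0], cs*A[0][1] + sn*A[1][1]],
         [-sn*A[0][0] + cs*A[1][0], -sn*A[0][1] + cs*A[1][1]]]
    # A_{k+1} = R Q = R G^T
    return [[R[0][0]*cs + R[0][1]*sn, -R[0][0]*sn + R[0][1]*cs],
            [R[1][0]*cs + R[1][1]*sn, -R[1][0]*sn + R[1][1]*cs]]

A = [[2.0, 1.0],
     [1.0, 1.0]]                 # eigenvalues (3 +/- sqrt(5)) / 2
for _ in range(60):
    A = qr_step(A)

lam1 = (3 + math.sqrt(5)) / 2
lam2 = (3 - math.sqrt(5)) / 2
assert abs(A[0][1]) < 1e-8 and abs(A[1][0]) < 1e-8   # off-diagonals vanish
assert abs(A[0][0] - lam1) < 1e-8 and abs(A[1][1] - lam2) < 1e-8
```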
Singular value decomposition
A = U ΣV T .
The singular value decomposition factors a matrix as a product of three factors, two
being orthogonal matrices and one being diagonal. The columns in one orthogonal factor
are left singular vectors, and the columns in the other orthogonal factor are the right
singular vectors. The matrix itself can be represented in outer product form in terms
of the left and right singular vectors. One use of this representation is in digital image
processing.
5-Solution of linear systems
These are methods for finding solutions of a set of linear equations, which may be written in the matrix form Ax = b, where x is the vector of the n unknowns.
Direct methods
Direct methods of solving linear systems all use some form of matrix factorization. The LU factorization is the most commonly used method to solve a linear system.
For certain patterned matrices, other direct methods may be more efficient. If a given
matrix initially has a large number of zeros, it is important to preserve the zeros in the
same positions in the matrices that result from operations on the given matrix. This helps
to avoid unnecessary computations. The iterative methods discussed in the next section
are often more useful for sparse matrices.
Another important consideration is how easily an algorithm lends itself to implementation on advanced computer architectures. Many of the algorithms for linear algebra can be vectorized easily. It is now becoming more important to be able to parallelize the algorithms.
Iterative methods
The Jacobi method
Let's start with Ax = b. A can be decomposed into a diagonal component D and the remainder R. The solution is then obtained iteratively by

x(k+1) = D⁻¹(b − Rx(k)) .

Each element is given by

xi(k+1) = (1/aii) ( bi − Σ_{j≠i} aij xj(k) ) , i = 1, 2, . . . , n .
Comments
1- The method works well if the matrix A is diagonally dominant.
2- The matrix must satisfy aii ≠ 0.
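The elementwise update above translates directly into code. The diagonally dominant test system below is an arbitrary choice, not from the notes.

```python
# Jacobi iteration: x_i^{k+1} = (b_i - sum_{j != i} a_ij x_j^k) / a_ii
# for a small diagonally dominant system; illustrative sketch only.

def jacobi(A, b, iterations=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[10.0, -1.0, 2.0],
     [-1.0, 11.0, -1.0],
     [2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]

x = jacobi(A, b)
# Residual should be tiny for this diagonally dominant matrix.
r = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
assert all(abs(ri) < 1e-10 for ri in r)
```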
The Gauss-Seidel method
In this method, we identify three matrices: a diagonal matrix D, a lower triangular matrix L with 0's on the diagonal, and an upper triangular matrix U with 0's on the diagonal, so that A = D + L + U and

(D + L)x = b − Ux .
We can write this entire sequence of Gauss-Seidel iterations in terms of these three fixed
matrices:
x(k+1) = (D + L)−1(−U x(k) + b).
The conjugate gradient method
x(k+1) = x(k) + α(k) p(k),
where p(k) is a vector giving the direction of the movement and α(k) is a scalar step length.

Multigrid methods
Iterative methods have important applications in solving differential equations. The
solution of differential equations by a finite difference discretization involves the form-
ation of a grid. The solution process may begin with a fairly coarse grid on which a
solution is obtained. Then a finer grid is formed, and the solution is interpolated from
the coarser grid to the finer grid to be used as a starting point for a solution over the finer
grid. The process is then continued through finer and finer grids. If all of the coarser grids
are used throughout the process, the technique is a multigrid method. There are many
variations of exactly how to do this. Multigrid methods are useful solution techniques for
differential equations.
Generalised Vector Calculus
Integral Theorems
Motivation & Objectives
The objectives of the remaining course units are to:
i. Generalise and formalise integral mathematics
ii. Give incentives for developing mathematical diligence
iii. Provide physical intuition, insight and feeling for the mathematics in CFD
iv. Provide awareness of mathematical properties, characteristics & assumptions
0-Definitions & Notations
The following definitions and notations are used throughout the remainder of the course:
Variable | Description | Dimensions or Mapping | Example
x, α | Scalars | R^{1×1} | Speed, Temperature
x, x | Vectors | R^{n×1} | Velocity, Position
n̂, ê | Unit Vectors | R^{n×1} | Boundary Normals
A, X | Matrices | R^{a×b} | Rotational Operators
ψ(x), φ(x) | Scalar Fields | R^{n×1} → R^{1×1} | Temperature Fields
F(x), F(x) | Vector Fields | R^{n×1} → R^{m×1} | Velocity Fields
S(x) | Surfaces | R^{(n−1)×1} → R^{n×1} | Potential Surfaces

All vectors are assumed to be of column nature, and all vector derivatives are assumed
to obey the numerator layout convention. The dimensional specification 1×1 for the real
scalars, or the additional ×1 for the vectors, is in fact slightly redundant notation; however, it helps to appreciate the shapes of the vector equations and operators. Additionally,
this careful notation may simplify coding by making array allocation and/or operations
clearly identifiable.
In general, this course assumes a Cartesian coordinate system, and that n = m = 3. However, all the presented concepts can be expressed in any coordinate system of choice
and most of the concepts are readily expanded to higher dimensions.
1-Required Vector Operators & Operations
The ∇ operator is, unless explicitly stated otherwise, assumed to be of the shape:

∇ = ∂/∂xᵀ = [ ∂/∂x , ∂/∂y , ∂/∂z ]ᵀ where x = [x, y, z]ᵀ , (76)
where the superscript T denotes the transpose operation.
Gradient of a Scalar Field
grad(ψ(x)) = ∇(ψ(x)) = [ ∂ψ(x)/∂x , ∂ψ(x)/∂y , ∂ψ(x)/∂z ]ᵀ ∈ R^{3×1} . (77)
The gradient is normal to level surfaces: let f : R³ → R be a C¹ map and let (x₀, y₀, z₀) lie on the level surface S defined by f(x, y, z) = k, for k a constant. Then ∇f(x₀, y₀, z₀) is normal to the level surface in the following sense: if v is the tangent vector at t = 0 of a path c(t) in S with c(0) = (x₀, y₀, z₀), then (∇f) · v = 0.
Laplacian of a Scalar Field
∆(ψ(x)) = ∇²(ψ(x)) = (∇ · ∇)(ψ(x)) = ( ∂²/∂x² + ∂²/∂y² + ∂²/∂z² )(ψ(x))
= ∂²ψ(x)/∂x² + ∂²ψ(x)/∂y² + ∂²ψ(x)/∂z² ∈ R^{1×1} . (78)
The Laplacian is the Divergence of the Gradient of a scalar field. It has important applications in Potential Flow Theory.
Laplacian of a Vector Field or Vector Laplacian
Not to be confused with a Laplacian Vector Field . From equation (68) the definition of
the Vector Laplacian follows as:
∇2F = ∇(∇· F) − ∇ × (∇ ×F) , (79)
which in Cartesian coordinates reduces to:
∇²F(x) = [ ∇²F₁(x) , ∇²F₂(x) , ∇²F₃(x) ]ᵀ ∈ R^{3×1} , (80)

where the vector field F(x) is composed of three scalar components F₁(x), F₂(x) and F₃(x), i.e. F(x) = [F₁(x) F₂(x) F₃(x)]ᵀ. For a scalar field, the Vector Laplacian
reverts to the familiar Laplacian.
Divergence of a Vector Field
div(F(x)) = ∇ · F(x) = ∂F₁(x)/∂x + ∂F₂(x)/∂y + ∂F₃(x)/∂z ∈ R^{1×1} . (81)
Curl of a Vector Field
curl(F(x)) = ∇ × F(x) =
| î      ĵ      k̂     |
| ∂/∂x   ∂/∂y   ∂/∂z  |  ∈ R^{3×1} , (82)
| F₁(x)  F₂(x)  F₃(x) |
where | · | denotes the determinant.
Primary Theorems of Calculus
The following theorems of calculus will be presented and discussed:
i. Fundamental Theorems of Calculus
ii. Gradient Theorem
iii. Green’s Theorem
iv. Divergence Theorem
v. Stokes’ Theorem
First & Second Fundamental Theorem of Calculus
The Riemann Integral is a common and rigorous definition of an integral:
∫_a^b f(x) dx = lim_{∆xₖ→0} Σₖ f(ξₖ) ∆xₖ , (83)
which has the geometrical interpretation of the area under the curve, as shown in figure 2.
Figure 2: Geometrical interpretation of a Riemann Integral
Another formal, but more general, definition regards integration as the process which reverses differentiation, which is why the integral is sometimes referred to as the Anti-Derivative.
1st Fundamental Theorem
The First Fundamental Theorem defines the Anti-Derivative as:
f(x) = ∫_a^x g(t) dt , (84)

given a continuous real-valued function g(t) over the closed interval domain [a, b]. It follows from this theorem that f(x) is continuous over the closed interval domain [a, b], differentiable over the open domain (a, b), and by definition:

g(x) = df(x)/dx . (85)
The First Fundamental Theorem relates the Derivative to the Integral and, most import-
antly, guarantees existence of integrals for continuous functions.
2nd Fundamental Theorem
The Second Fundamental Theorem defines definite integrals as:

∫_a^b g(x) dx = f(b) − f(a) , (86)

for real-valued functions g(x) and f(x) on [a, b] related by equation (85). This theorem, unlike the First Fundamental Theorem, does not require g(x) to be continuous.
Generalised Line Integrals & Gradient Theorem
The Gradient Theorem is also referred to as the Fundamental Theorem of Calculus for
Line Integrals. It represents the generalisation of integration along an axis, e.g. dx or dy (as in the 2nd Fundamental Theorem of Calculus), to the integration of vector fields along arbitrary curves, C, in their base space.
Scalar Field Line Integral
Assume a surface z = f(x, y), see figure 3, which is to be integrated from A to B along the curve C. Geometrically, this corresponds to the curtain surface, Sₛ, bound by the surface f(x, y) and C, which is expressed as:
Sₛ = ∫_C f(x, y) ds where ds = (dx² + dy²)^{1/2} . (87)

Assume C is parametrised such that x = x(t) and y = y(t), with t = a at A and t = b at B; then equation (87) can be expressed as:

∫_C f(x(t), y(t)) (dx² + dy²)^{1/2} = ∫_a^b f(x(t), y(t)) [ (dx/dt)² + (dy/dt)² ]^{1/2} dt , (88)
or alternatively, if y can be expressed as y = y(x), then it follows:
∫_C f(x, y) ds = ∫_{xA}^{xB} f(x, y(x)) [ (dy/dx)² + 1 ]^{1/2} dx . (89)

If the integral is only evaluated either along dx or dy, then only the axis projection surfaces are obtained:

S_x = ∫_C f(x, y) dx or S_y = ∫_C f(x, y) dy . (90)
Figure 3: Generalised line integral on a scalar field
Example: Scalar Field Line Integral
Assume ∫_C f(x, y) ds, where f(x, y) = h is constant, and C is a circle of radius R. Parametrise C as x(θ) = R cos(θ) and y(θ) = R sin(θ) for θ = 0 → 2π. It follows that:
Sₛ = ∫_C h ds
= ∫_C h (dx² + dy²)^{1/2}
= ∫_0^{2π} h [ (dx/dθ)² + (dy/dθ)² ]^{1/2} dθ
= ∫_0^{2π} hR [ (−sin θ)² + (cos θ)² ]^{1/2} dθ
= ∫_0^{2π} hR dθ = 2πhR . (91)
Figure 4: Cylinder surface integral
Note that due to the closed-loop integration path, the axis projections S_x and S_y are nil in this case:

S_x = ∫_C h dx = ∫_{xA}^{xB} h dx + ∫_{xB}^{xA} h dx = 0 . (92)
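The result (91) can be reproduced by simple midpoint quadrature in the parameter θ; the values of h and R below are arbitrary choices, and the sketch is not part of the original notes.

```python
import math

# Numerical version of example (91): the line integral of a constant
# f = h around a circle of radius R equals 2*pi*h*R (the area of the
# "curtain", i.e. a cylinder wall).  Midpoint rule in theta.

h, R = 2.0, 3.0
N = 10000
S = 0.0
for k in range(N):
    t = (k + 0.5) * 2 * math.pi / N
    dxdt = -R * math.sin(t)
    dydt = R * math.cos(t)
    S += h * math.hypot(dxdt, dydt) * (2 * math.pi / N)

assert abs(S - 2 * math.pi * h * R) < 1e-9
```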
Generalised Gradient Theorem
Postulate the integral of a 3D vector field, F(x) = [F₁(x) F₂(x) F₃(x)]ᵀ, along an arbitrary 3D curve, C, which is parametrised as x = x(t), y = y(t), z = z(t), i.e. ds = [dx dy dz]ᵀ, and goes from point p to point q. The corresponding generalised line integral becomes:

∫_C F · ds = ∫_C ( F₁(x) dx + F₂(x) dy + F₃(x) dz )
= ∫_{t=a}^{t=b} ( F₁(x(t)) dx/dt + F₂(x(t)) dy/dt + F₃(x(t)) dz/dt ) dt . (93)
Equation (93) may for instance represent the work performed on a particle in 3D as
it travels through an external force field, F(x). Postulating that this vector field, F(x), can be obtained as the gradient of a scalar field, ψ(x), i.e. F(x) = ∇ψ(x), it follows, together with the 2nd Fundamental Theorem of Calculus, that:

∫_C F · ds = ∫_C ∇ψ(x) · ds = ψ(q) − ψ(p) . (94)
Equation (94) is known as the Gradient Theorem and implies path independence of
the integral if and only if F(x) = ∇ψ(x). It immediately follows from equation (66) that such a vector field F(x) is irrotational, i.e. curl(F) = ∇ × F = 0, because:

∇ × F = ∇ × (∇ψ(x)) = 0 , (95)

for any scalar field ψ(x). The scalar field ψ is referred to as a conservative or potential field, with the corresponding vector field F denoted a conservative vector field. Conversely, it is always possible to express a conservative vector field F in terms of a scalar potential field. This theorem is at the basis of much of Potential Flow and irrotational fluid dynamics.
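The path independence can be checked numerically: integrating F = ∇ψ along some curve from p to q should return ψ(q) − ψ(p). The potential ψ and the curve below are arbitrary choices, not from the notes.

```python
import math

# Numerical check of the Gradient Theorem: the line integral of
# F = grad(psi) from p to q equals psi(q) - psi(p), whatever the path.

def psi(x, y, z):
    return x**2 * y + math.sin(z)

def grad_psi(x, y, z):
    return (2 * x * y, x**2, math.cos(z))

def path(t):
    """Some curve from p = path(0) to q = path(1)."""
    return (math.cos(math.pi * t), t**2, 2 * t)

N = 50000
integral = 0.0
for k in range(N):
    x0 = path(k / N)
    x1 = path((k + 1) / N)
    F = grad_psi(*path((k + 0.5) / N))   # midpoint rule
    integral += sum(Fc * (b - a) for Fc, a, b in zip(F, x0, x1))

p, q = path(0.0), path(1.0)
assert abs(integral - (psi(*q) - psi(*p))) < 1e-6
```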
Green’s Theorem
For a 2D convex region Ω, i.e. x = [x y]T with boundary Γ, Green’s Theorem states:
∬_Ω ( ∂F₂(x)/∂x − ∂F₁(x)/∂y ) dx dy = ∮_Γ ( F₁(x) dx + F₂(x) dy ) (96)
(a) Convex domain
(b) Non-convex domain
Figure 5: Green’s Theorem on convex & non-convex domain
Considering figure 5 (a), the F₁(x) integrand part of equation (96) can be resolved as:

∬_Ω −∂F₁(x)/∂y dx dy = ∫_{x=a}^{x=b} [ ∫_{y=u(x)}^{y=v(x)} −∂F₁(x)/∂y dy ] dx
= ∫_{x=a}^{x=b} −F₁(x, v(x)) dx + ∫_{x=a}^{x=b} F₁(x, u(x)) dx
= ∮_Γ F₁(x, y) dx , (97)
where the integration direction always follows a right-hand rotation about the domain
normal, i.e. k̂. Similarly, the F₂(x) integrand part of equation (96) can be resolved. If the region Ω is not convex, it can always be subdivided into sub-domains, Ωᵢ, where the line integrals at the internal boundary between sub-domains i and j, Γij, cancel due to opposite directions of integration, see figure 5 (b), i.e. ∫_{Γji} = −∫_{Γij}.

Green's Theorem gives the necessary and sufficient condition for a line integral ∮_Γ (F₁(x) dx + F₂(x) dy) to be path independent, in a simply connected region, as:

∂F₂(x)/∂x − ∂F₁(x)/∂y = 0 or ∂F₂(x)/∂x = ∂F₁(x)/∂y . (98)
This is equivalent to a nil curl of the vector field F(x) = [F 1(x, y) F 2(x, y) 0]T,
implying that the vector field F(x) is an irrotational field in the x-y plane, i.e.:
∇ × F(x) = [ 0 , 0 , ∂F₂(x)/∂x − ∂F₁(x)/∂y ]ᵀ = 0 . (99)
Coupled with complex analysis, keyhole integration for domains with singularities,
and complex integrals, Green's Theorem can be used for developing Laurent Series, Cauchy Residues or Laplace Transforms. These methods have important applications
in system dynamics and stability analyses.
Divergence Theorem
The Divergence Theorem, also referred to as Gauss’ or Ostrogradsky’s Theorem, relates
the vector flux through a domain boundary, Γ, to the vector field within the domain, Ω.
Theory & Derivation
This theorem can be obtained in several ways, though only two are presented below.
Bottom-Up or Surface to Volume Derivation
A definition of the divergence of a vector field can be physically interpreted as the time evolution of an infinitesimal volume, V, of substance in a velocity field, F; see pg. 217 of the recommended textbook:

div(F) = ∇ · F = lim_{V→0} (1/V) dV/dt . (100)

Consider one of the infinitesimal cells i or j in figure 6. The variational change in volume of either cell, during the time increment ∆t, is given by the volume swept out by its displaced surface boundary, i.e.:

∆Vᵢ = ∯_{Γi} ∆t (F · n̂) dA , (101)

which in the limit ∆t → 0 results in:

dVᵢ/dt = ∯_{Γi} F · dA . (102)
When combining both cells i and j, the integrals at the shared boundary surface cancel, due to opposing signs in the outward normal vector dA, resulting in:

∯_Γ F · dA = Σᵢ ∯_{Γi} F · dA = Σᵢ [ (1/∆Vᵢ) ∯_{Γi} F · dA ] ∆Vᵢ . (103)

Substituting equation (102), taking lim_{Vᵢ→0}, and using equation (100), gives the Divergence Theorem as:

∯_Γ F · dA = ∭_Ω ∇ · F dV , (104)
where Ω represents the overall volume domain and Γ denotes the total surface boundary.
Figure 6: Surface boundary normals for adjacent infinitesimal cells
Top-Down or Volume to Surface Derivation
Figure 7: Surface boundary for a convex domain
For the convex domain, Ω, in figure 7, consider just the F 3(x) volume integral of the Divergence Theorem, i.e.:
∭_Ω ∂F₃(x)/∂z dV = ∬_R [ ∫_{z=u(x,y)}^{z=v(x,y)} ∂F₃(x, y, z)/∂z dz ] dx dy
= ∬_R [ F₃(x, y, v(x, y)) − F₃(x, y, u(x, y)) ] dx dy
= ∯_Γ F₃(x) k̂ · dA , (105)

because dx dy = k̂ · dA on v(x, y) while dx dy = −k̂ · dA on u(x, y). The remaining two parts of the volume integrand can be equally resolved, obtaining the theorem as:

∭_Ω ∇ · F dV = ∯_Γ F · dA . (106)
Obtaining Green’s Theorem from the Divergence Theorem
Assuming only 2D vector fields, the Divergence Theorem results in Green’s Theorem.
For the reduced dimensionality, equation (106) reduces to:
∬_Ω ∇ · F dA = ∮_Γ F · dsₙ , (107)

where Ω is now a surface domain, Γ is a line boundary and dsₙ is an outward boundary vector, i.e. dsₙ = ds n̂. The latter is related to the tangential vector ds, used in Green's Theorem, see equation (96), by a negative π/2 rotation as:

dsₙ = |  0  1 | ds . (108)
      | −1  0 |
Substituting equation (108) into equation (107) leads to:
∬_Ω ( ∂F₁(x)/∂x + ∂F₂(x)/∂y ) dx dy = ∮_Γ ( −F₂(x) dx + F₁(x) dy ) , (109)

which is equivalent to equation (96) for the vector field G = [G₁ G₂]ᵀ = [(−F₂) F₁]ᵀ.
Stokes’ Theorem
Stokes’ Theorem relates surface integrals to line integrals. However, in its more general
form, the theorem relates integrals in dimension Rn to integrals in Rn−1.
Theory & Derivation
A strictly mathematical derivation can be found on pg. 331 of the recommended textbook.
The Graphical Derivation
Figure 8 depicts a generic surface, Ω, in a global frame of reference x = [x y z]ᵀ, as well as an infinitesimal surface element, Ωᵢ, in a locally rotated coordinate system x′ = [x′ y′ z′]ᵀ. Firstly, the surface integral of the curl of the vector field F(x) is a scalar quantity, and hence coordinate frame invariant, i.e.:

∬_{Ωi} ∇ × F(x) · dA = ∬_{Ωi} ∇′ × G(x′) · dA′ , (110)

where G(x′) is the equivalent vector field in the local frame, with dA′ = dx′ dy′ k̂′:

∬_{Ωi} ∇ × F(x) · dA = ∬_{Ωi} ( ∂G₂(x′)/∂x′ − ∂G₁(x′)/∂y′ ) dx′ dy′ , (111)

to which Green's Theorem can be directly applied, giving:

∬_{Ωi} ∇ × F(x) · dA = ∮_{Γi} G(x′) · ds′ . (112)
Figure 8: Stokes’ Theorem on a surface
Secondly, summing over all infinitesimal domains, Ωᵢ, noting that integrals along internal boundaries, Γᵢ, cancel due to opposite directions of integration, and defining the overall boundary Γ in the global reference frame, Stokes' Theorem follows as:

∬_Ω ∇ × F(x) · dA = ∮_Γ F(x) · ds . (113)
The domain Ω must be a simply connected region and F(x) must not include singularities along Γ. Stokes' Theorem is the most generalised theorem and includes the Divergence, Green's and the 2nd Fundamental Theorem as special cases. Chapters 13.4-13.6
of the recommended textbook are highly suggested for further discussions and examples.
Partial Differential Equations
NOTE: All figures for this section will be covered during lectures
Motivation & Objectives
PDE appear throughout physics and engineering. Below is a selection of important PDE which govern different physical phenomena:
ψt + u ψx = 0 (1D Advection Equation)
ut + u ux − ν uxx = 0 (Viscous Burgers' Equation)
ut + u ux = 0 (Inviscid Burgers' Equation)
ut − uxx = 0 (Heat Equation)
utt − uxx = 0 (Wave Equation)
ψxx + ψyy = 0 (2D Laplace's Equation)
ψt + ψxxx + ψ ψx = 0 (Korteweg-de Vries Equation)
Solution Strategies
The field of PDE is vast and many solution strategies are available; a few of the most popular analytical approaches include:
Separation of Variables
Transform Methods
Method of Characteristics
Similarity Solutions
h-Principle
Definitions & Notations
In general, any PDE can be expressed in the following form:
F( x , u(x) , ∂u(x)/∂x₁ , . . . , ∂u(x)/∂xₙ , . . . , ∂²u(x)/∂xᵢ∂xⱼ , · · · ) = 0 , (114)

where the vector x comprises all n problem variables and the operator F must not be confused with F from the previous vector calculus chapter. Adopting the notation ∂u/∂x = ux, ∂²u/∂x² = uxx, and assuming in this chapter that the variable vector takes the form x = [x y]ᵀ ∈ R², equation (114) can be restated as:

F(x, y, u, ux, uy, uxx, uxy, uyy, . . . , uxxxy, · · ·) = 0 . (115)
Order & Linearity
Two properties are primarily used to differentiate between different PDE. The first is the Order, which refers to the highest derivative in the given PDE. The second is Linearity, which states that linear combinations of existing solutions, u(x, y) and v(x, y), lead to new admissible solutions.

Define L to be the operator which represents equation (114), e.g. L = ∂/∂t − ∂²/∂x² for the heat equation. The PDE for the heat equation can then be rewritten as:

L(u) = 0 . (116)

A PDE is linear if and only if (iff):

L(u + v) = L(u) + L(v) and L(ku) = k L(u) , (117)

for any solution functions u, v and constant k. Hence, as examples, the heat, wave and Laplace equations are linear, while the advection and Burgers' equations are generally non-linear. Non-linearity can occur, for instance, if the solution, u, is part of a derivative's coefficient or if a derivative carries a power exponent. Familiarity with identifying linearity is paramount for subsequent studies and modules.
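The linearity test (117) can be illustrated for the heat operator L = ∂/∂t − ∂²/∂x² using central differences. The sample functions below are arbitrary choices (both happen to solve the heat equation exactly, so L(u) ≈ 0 as well); this sketch is not part of the original notes.

```python
import math

# Linearity check (117) for the heat operator L = d/dt - d^2/dx^2,
# discretised with central differences at a single test point.

H = 1e-4

def L(f, x, t):
    """Finite-difference heat operator f_t - f_xx at (x, t)."""
    ft = (f(x, t + H) - f(x, t - H)) / (2 * H)
    fxx = (f(x + H, t) - 2 * f(x, t) + f(x - H, t)) / H**2
    return ft - fxx

u = lambda x, t: math.exp(-t) * math.sin(x)   # solves u_t = u_xx
v = lambda x, t: x**3 + 6 * x * t             # solves v_t = v_xx
w = lambda x, t: u(x, t) + 2.5 * v(x, t)

x0, t0 = 0.7, 0.3
assert abs(L(u, x0, t0)) < 1e-4                                        # L(u) = 0
assert abs(L(w, x0, t0) - (L(u, x0, t0) + 2.5 * L(v, x0, t0))) < 1e-5  # linearity
```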
Existence, Uniqueness & Boundary Conditions
Existence and uniqueness of solutions for PDE is beyond the scope of this introductory course. However, it is noted here that not all PDE problems are well posed. Whether a problem is well posed or ill posed depends not only on the PDE structure alone, but on the combination of the PDE and the given Boundary Conditions (BC). Formally, the combination of a PDE with its BC is referred to as a Cauchy Problem. Please refer to the Cauchy-Kowalevski theorem for details of existence and uniqueness.

Facing any PDE, it is recommended to be careful by default, as even linear PDE such as Laplace's equation can become ill posed with some BC. Even if a solution to a PDE with given BC exists and is unique, it may feature undesirable physical properties (weak vs. strong solutions).

Moreover, it is important to be aware of the different types of Boundary Conditions, e.g. Neumann, Dirichlet, Robin, Cauchy and Mixed BC, to which PDE can be subjected. Again, familiarity with the different types of BC is paramount.
Degrees of Non-Linearity in 1st Order PDE
Let the reference 1st order PDE for most of this section be of the form:
a(x, y) ∂u(x, y)/∂x + b(x, y) ∂u(x, y)/∂y = c(x, y, u(x, y)) . (118)
The potential degree of non-linearity embedded in a PDE of first order leads to the following classification:
PDE Type | Description
Linear Constant Coefficient | a, b, c are constant functions
Linear | a, b, c are functions of x and y only
Semi-Linear | a, b are functions of x and y; c may depend on u
Quasi-Linear | a, b, c are functions of x, y and u
Non-Linear | the derivatives carry exponents, e.g. (ux)², or derivative cross-terms exist, e.g. ux uy
Hence, equation (118) represents a semi-linear PDE, because it permits a mild non-linearity in the source term, c(x, y, u(x, y)).
Homogeneous & Inhomogeneous Linear PDE
A linear PDE is called homogeneous iff it can be written in the form:

L(u) = 0 ,   (119)

which implies that the source term is nil, c(x, y) = 0, while an inhomogeneous linear PDE can be written in the form:

L(u) = c(x, y) .   (120)

The inhomogeneous solution can sometimes be obtained directly or, often, by first finding the solution for a point source and then performing a convolution with the given BC. The overall solution to an inhomogeneous linear PDE can then be obtained by superposition of the homogeneous solution, u_h, and the inhomogeneous or particular solution, u_p, as u = u_h + u_p. Homogeneous PDE are often related to conservation laws without sources or forcing, e.g. conservation of mass with no source.
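The superposition u = u_h + u_p can be sketched numerically with the simplest linear operator, L(u) = u'' (a 1D stand-in for a PDE; the particular and homogeneous functions below are illustrative choices, not from the notes):

```python
# Minimal sketch of u = u_h + u_p: L(u_h) = 0 (homogeneous) and L(u_p) = c
# (particular), so the superposition still satisfies L(u) = c, here with c = 1.
h = 1e-3

def L(u, x):                       # second derivative by central differences
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

u_h = lambda x: 3 + 2 * x          # solves u'' = 0 (any linear function works)
u_p = lambda x: x**2 / 2           # solves u'' = 1
u   = lambda x: u_h(x) + u_p(x)    # superposed solution

x0 = 0.4
print(L(u_h, x0), L(u_p, x0), L(u, x0))   # ≈ 0, 1, 1
```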
Method of Characteristics - 1st Order PDE
Theory & Derivation
Assume the following linear first order PDE:

a(x, y) ∂u(x, y)/∂x + b(x, y) ∂u(x, y)/∂y = c(x, y) .   (121)
The solution, u(x, y), is a surface such that z = u(x, y), which can be rewritten in implicit form as 0 = u(x, y) − z. The vector gradient operator, ∇, applied to this surface gives the normal vector, n, at every point as:

n = [ ∂u(x, y)/∂x , ∂u(x, y)/∂y , −1 ]^T .   (122)
Hence equation (121) can be written in the form:

[ ∂u(x, y)/∂x , ∂u(x, y)/∂y , −1 ]^T · [ a(x, y) , b(x, y) , c(x, y) ]^T = n · G = 0 .   (123)
It follows that the vector field G always lies in the tangential plane to the surface u(x, y). Assume we can parametrise a curve, C, which lies in the solution surface and which at every point satisfies the following system of ODE:
[ dx/ds , dy/ds , dz/ds ]^T = [ a(x, y) , b(x, y) , c(x, y) ]^T .   (124)
A curve of this type is called an integral curve of the vector field G, which in the context of a PDE is known as a characteristic curve. The solution surface can then be reconstructed (traced) from the union of all characteristic curves. The PDE has been reduced to a system of ODE, equations (124), which can be solved. The parametrisation variable can be eliminated from this system, and by setting z = u, the Lagrange-Charpit equations can be obtained as:
dx / a(x, y) = dy / b(x, y) = du / c(x, y) ,   (125)
which can easily be extended to include non-linear cases. If c(x, y) = 0, it follows from the third line of equations (124) that u is constant along the curve and hence du = 0. In any case it is possible to integrate:
dy/dx = b(x, y) / a(x, y) ,   (126)
which, when drawn in the x-y plane, yields the projected characteristic curves.
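Equation (126) is an ordinary differential equation and can be integrated numerically when no closed form is available. The sketch below is a hypothetical example with the illustrative choice a = 1, b = y, so that dy/dx = y and the projected characteristics are y = y0 e^x; a small RK4 integrator traces one curve and checks the invariant y e^(−x):

```python
# RK4 integration of the projected-characteristic ODE dy/dx = b/a for the
# illustrative coefficients a = 1, b = y (curves y = y0 * e^x). The quantity
# y * exp(-x) should remain constant along the traced curve.
import math

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

slope = lambda x, y: y / 1.0       # dy/dx = b(x, y) / a(x, y) with a = 1, b = y

x, y, h = 0.0, 0.5, 0.01           # start on the curve with y0 = 0.5
invariant0 = y * math.exp(-x)
for _ in range(100):               # integrate out to x = 1
    y = rk4_step(slope, x, y, h)
    x += h

drift = abs(y * math.exp(-x) - invariant0)
print(x, y, drift)                 # y ≈ 0.5 * e ≈ 1.3591, drift ≈ 0
```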
Example: Advection Equation
Imagine a scalar quantity being propagated in 1D in a constant velocity field of speed a, where the initial distribution of the scalar is prescribed along the boundary, Γ = {(x, t = 0)}, so that the complete problem is stated as:

ψ_t + a ψ_x = 0 ,
ψ(x, 0) = φ(x) .   (127)
Integration of equations (125) results in:
x = a t + k1 ,
u = k2 ,   (128)
where k1 and k2 are constants, such that the general solution can be expressed as:
x − a t = k1 ,
u = k2 = f(k1) = f(x − a t) ,   (129)
where f is an arbitrary function depending on the BC. The BC MUST NOT beimposed on a curve tangential to the projected characteristic curves.
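The general solution (129) can be verified directly: any profile transported at speed a satisfies the PDE. The check below uses illustrative choices (φ = tanh, a = 2) and finite differences:

```python
# Finite-difference check that any psi(x, t) = phi(x - a*t) satisfies the
# advection equation psi_t + a*psi_x = 0; phi and a are illustrative choices.
import math

a = 2.0
phi = lambda s: math.tanh(s)               # illustrative initial profile
psi = lambda x, t: phi(x - a * t)          # general solution (129)

h = 1e-5
x0, t0 = 0.3, 0.7
psi_t = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
psi_x = (psi(x0 + h, t0) - psi(x0 - h, t0)) / (2 * h)

residual = psi_t + a * psi_x
print(residual)                            # ≈ 0
# The initial condition is recovered at t = 0: psi(x, 0) = phi(x).
```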
Example: Inhomogeneous PDE
Find the general solution to the following problem statement:

y u_x + x u_y = x² + y² ,
u(x, 0) = 1 + x² and u(0, y) = 1 + y² ,   (130)
by drawing the projected characteristic curves and then showing that the particular solution is different in different regions of the domain. The projected characteristic curves are obtained as:
y dy = x dx ,
y² − x² = k1 ,   (131)
while the source term leads to:
du = (x² + y²) dx / y = x dy + y dx = d(x y) ,
u − x y = k2 ,   (132)
such that the general solution is u = x y + f(y² − x²). The BC result in:
1 + x² = f(−x²) on y = 0 ,
1 + y² = f(y²) on x = 0 ,   (133)
so that the arbitrary function is f(t) = |t| + 1. Hence the full solution is:

u(x, y) = x y + 1 + |y² − x²| =
    x y + 1 + y² − x²  if y² − x² ≥ 0 ,
    x y + 1 + x² − y²  if y² − x² ≤ 0 .   (134)
This example illustrates the concept of regions of influence, which will be revisited later on.
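A numerical spot-check of the solution (134) is straightforward; the evaluation points below are arbitrary choices away from the lines y² = x², where the absolute value is not differentiable:

```python
# Spot-check that u = x*y + 1 + |y**2 - x**2| satisfies y*u_x + x*u_y = x**2 + y**2
# in both regions, and that the boundary conditions (130) are met exactly.
u = lambda x, y: x * y + 1 + abs(y**2 - x**2)

h = 1e-6
def residual(x, y):
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return y * u_x + x * u_y - (x**2 + y**2)

r1 = abs(residual(0.5, 2.0))      # point in the region y**2 > x**2
r2 = abs(residual(2.0, 0.5))      # point in the region y**2 < x**2
bc = abs(u(1.3, 0.0) - (1 + 1.3**2)) + abs(u(0.0, 0.7) - (1 + 0.7**2))
print(r1, r2, bc)                 # all ≈ 0
```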
Quasi-Linear PDE with Shocks and Expansion Fans
Consider the following inviscid Burgers’ problem statement:
u_t + u u_x = 0 ,
u(x, 0) = φ(x) =
    0 if x < −1 ,
    1 if −1 ≤ x
Method of Characteristics - 2nd Order PDE
Assume the following second order PDE:
r(x, y) ∂²u(x, y)/∂x² + 2 s(x, y) ∂²u(x, y)/∂x∂y + t(x, y) ∂²u(x, y)/∂y² = q(x, y, u(x, y)) .   (136)
Equation (136) omits the first order derivatives but these are not vital and mayeasily be included if required. Noting the following two additional equations:
d(u_x) = u_xx dx + u_xy dy ,
d(u_y) = u_yx dx + u_yy dy ,   (137)
equations (136) and (137) can be written as a system of the form:

[ r(x, y)  2 s(x, y)  t(x, y) ] [ u_xx ]   [ q(x, y) ]
[   dx        dy         0    ] [ u_xy ] = [  du_x   ]
[    0        dx        dy    ] [ u_yy ]   [  du_y   ]   (138)
Finally, noting that along a characteristic curve there MUST NOT be a unique solution for u_xx, u_xy and u_yy, it follows that the coefficient matrix must be singular, which occurs iff:
r(x, y) (dy/dx)² − 2 s(x, y) (dy/dx) + t(x, y) = 0 .   (139)
The two roots of equation (139) are:
dy/dx = [ s(x, y) ± √( s(x, y)² − r(x, y) t(x, y) ) ] / r(x, y) ,   (140)
and these constitute a pair of differential equations which lead to the projected characteristic curves. Three fundamentally different behaviours of the PDE result, depending on the discriminant in equation (140).
1. Hyperbolic PDE   s(x, y)² > r(x, y) t(x, y)   Real Distinct Roots
2. Parabolic PDE    s(x, y)² = r(x, y) t(x, y)   Real Repeated Roots
3. Elliptical PDE   s(x, y)² < r(x, y) t(x, y)   Complex Conjugate Roots
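The discriminant test can be sketched as a small helper; the sample coefficients below are the standard wave, heat and Laplace operators rewritten in the r, s, t notation of equation (136):

```python
# Point-wise classification of a 2nd order PDE by the sign of the discriminant
# s**2 - r*t from equation (140).
def classify(r, s, t):
    disc = s**2 - r * t
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptical"

c = 3.0                                    # wave speed
wave = classify(1.0, 0.0, -c**2)           # u_tt - c^2 u_xx = 0  -> disc = c^2 > 0
heat = classify(1.0, 0.0, 0.0)             # u_t = u_xx (no u_tt) -> disc = 0
laplace = classify(1.0, 0.0, 1.0)          # u_xx + u_yy = 0      -> disc = -1 < 0
print(wave, heat, laplace)                 # hyperbolic parabolic elliptical
```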
Variable Type PDE – Regions of Dependence & Influence
Elliptical PDE, such as Laplace's equation, constitute Boundary Value Problems, where the solution at one specific point in the domain depends on the current solution everywhere in the domain. Conversely, in hyperbolic PDE, e.g. the wave equation, information travels (for time dependent problems) at a finite speed through the domain, so that the solution at a point depends only on a finite part of the domain, the Region of Dependence. Similarly, the solution at this point will only affect part of the domain, the Region of Influence. Parabolic PDE, e.g. the heat equation, are similar to hyperbolic except that the information propagates at infinite speed. Parabolic and hyperbolic PDE are suited for time-marching schemes and are referred to as Initial Value Problems. For complex PDE, the classification into hyperbolic, parabolic or elliptical may not be trivial.
A second order PDE which is not a constant coefficient PDE can change its type throughout the simulation history or the spatial domain. For example, the steady Euler equation with irrotational flow, u = ∇ψ, can be expressed as:

(1 − M²) ∂²ψ/∂s² + ∂²ψ/∂n² = 0 ,   (141)

where M is the Mach Number and s and n are coordinates along and normal to a streamline, respectively. This PDE is elliptical in the sub-sonic regime, parabolic at the sonic condition and hyperbolic in the super-sonic regime.
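For equation (141) the classification follows directly from the discriminant of equation (140) with r = 1 − M², s = 0, t = 1, which gives s² − r t = M² − 1; a minimal sketch:

```python
# Regime of equation (141) as a function of Mach number: the discriminant
# s**2 - r*t = M**2 - 1 changes sign exactly at the sonic condition M = 1.
def regime(M):
    disc = M**2 - 1.0              # = s**2 - r*t with r = 1 - M**2, s = 0, t = 1
    if disc < 0:
        return "elliptical (sub-sonic)"
    if disc == 0:
        return "parabolic (sonic)"
    return "hyperbolic (super-sonic)"

print(regime(0.5), regime(1.0), regime(2.0))
```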
Canonical Forms & Representative PDE
Each type of PDE can, upon a transformation into the characteristic variables η and ζ, be expressed in its canonical form. It also follows that each PDE can be expressed in a form similar to either the heat, wave or Laplace equation.
PDE Type     Characteristic Families                     Representative PDE
Hyperbolic   2 distinct real families η, ζ               Wave: u_ηζ = 0
Parabolic    1 real family η (ζ free to choose)          Heat: u_ηη = 0
Elliptical   2 complex conjugate η = v + i w , ζ = η̄     Laplace: u_vv + u_ww = 0
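The hyperbolic canonical form u_ηζ = 0 implies the general solution u = F(η) + G(ζ), which can be checked with a finite-difference cross stencil (F and G below are arbitrary illustrative functions):

```python
# Check that any u(eta, zeta) = F(eta) + G(zeta) satisfies the hyperbolic
# canonical form u_etazeta = 0, using a 4-point stencil for the mixed derivative.
import math

F = lambda e: math.sin(e)          # arbitrary smooth function of eta
G = lambda z: z**3                 # arbitrary smooth function of zeta
u = lambda e, z: F(e) + G(z)

h = 1e-4
e0, z0 = 0.4, 0.8
u_ez = (u(e0 + h, z0 + h) - u(e0 + h, z0 - h)
        - u(e0 - h, z0 + h) + u(e0 - h, z0 - h)) / (4 * h**2)
print(u_ez)                        # ≈ 0
```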