MS2200 Anum Part 1
Transcript of MS2200 Anum Part 1
MS2200 Numerical Analysis & Programming
Dr. Ir. Rachman Setiawan, Engineering Design Centre (Mechanical Design Lab.)
Tel. 2500979 ext. 102, E-mail: [email protected]
0. Lectures
0.1 Schedule
Monday 14.00 – 15.00 (4102); Thursday 14.00 – 16.00 (4102)
0.2 Evaluation
Quiz - 15%; Homework/Assignment - 20%; Mid-test - 30%; Final test - 35%
Rac 2009
0.3 Reference Book
Steven C. Chapra and Raymond P. Canale, "Numerical Methods for Engineers", Fourth ed., McGraw-Hill International Ed., 2002 (available at HMM).
0.4 Etcetera
There is a 5% bonus (max.), proportional to attendance, for a minimum attendance of 70%
Two-hour lecture: 2 parts, with a break between the parts
Programming lab./assignments will use MATLAB, or equivalent freeware (FreeMath)
Limit: Students: 10 min.; Lecturer: 15 min.
0.5 Syllabus
Introduction to Numerical Analysis, Computers & Basic Programming
Errors
Roots of Polynomials & Equations
Matrices & Systems of Algebraic Equations
Optimization
Curve Fitting: Interpolation, Regression
Numerical Integration & Differentiation
Differential Equations: ordinary; partial (optional)
0.6 Structure of Text Book
Part One: Modelling, Computers, and Error Analysis
Part Two: Roots of Equations
Part Three: Linear Algebraic Equations
Part Four: Optimization
Part Five: Curve Fitting
Part Six: Numerical Differentiation and Integration
Part Seven: Ordinary Differential Equations
Part Eight: Partial Differential Equations
1. Intro. to Numerical Analysis
1.1 Definition
Numerical analysis is one of the methods of analysis, consisting of techniques to solve mathematical equations with arithmetical calculation.
1.2 Methods in Engineering Problem Solving
(Diagram: a Problem/Reality can be approached by Empirical methods/experiments, by Simulation/Theory, or Analytically; with computer aid, each route leads to a Numerical Solution.)
1.3 History
1.4 Objectives
To provide students with a sound introduction to some numerical methods to solve practical engineering problems
To introduce programming language(s) to students, or enhance students' programming skills
2. Intro. to Computers & Programming
2.1 Computers
An electronic device that takes input, processes it by calculation, and delivers output: numbers, graphs, sound.
Rac 7
2.2 Categories By the size and capability:
Super computer Mainframe Mini computer Micro computer Personal computer: Desk top, Laptop Palm top computer Programmable calculator
13Rac 2009
14Rac 2009
Rac 8
By their function in networks: Server; Workstation; Client
By CPU architecture: IBM compatible; Apple; UNIX
2.3 Components
Hardware
Input peripherals: keyboard, mouse/track-ball/track-point/touch-screen, digital camera, scanner, modem, various data-storage media, microphone, various sensors connected via cards/PCMCIA, etc.
Output peripherals: hard copy (printer), storage, modem, soft copy
I/O ports: COM, PS/1, PS/2 (for mouse, keyboard), serial, parallel (printer, scanner, etc.), USB (universal serial bus), IEEE1394 (firewire/i-link), infra-red
Central Processing Unit (CPU): Pentiums, AMD, AMD64, Core Duo
RAM (Random Access Memory)
Various cards, e.g. graphics, sound, modem/LAN, special-purpose cards
Storage: hard disk
Power supply
Motherboard, where components a–f are connected through I/O ports, expansion slots (ISA, PCI, AGP), buses and cables. Components c–h are called the system unit.
Display: monitor, LCD, TFT.
Software
Operating system: Windows, DOS, Solaris, LINUX, etc.
Application: Ms Office, AutoCAD, Ansys, Matlab, Winamp
Brainware
Hardware manufacturer; Programmer/software builder
3. Programming Basics
3.1 Why programming?
To translate mathematical algorithms in numerical methods into a language that computers understand.
(Diagram: Physical problem → Mathematical model → Arithmetical model → Algorithm → Computer program → Numerical solution.)
3.2 Computer Programs
Computer programs are a set of instructions that direct the computer to perform a certain task.
Classification:
High-level: programming languages, e.g. Fortran, Basic, C, etc.
Low-level: machine language
Programming topics:
Simple information representation (constants, variables, etc.)
Advanced information representation (data structures, arrays, records)
Mathematical formulae
Input/output
Logical representation
Modular programming
3.3 Structured Programming
A structured program is a set of rules that prescribe good style habits for the programmer.
Apart from structured programming: top-down programming; modular programming.
Characteristics:
Systematic, easier to understand
Easier to debug, test and modify
Requires computers that can translate it to an unstructured version before running it
3.4 Communications
Algorithm: a set of steps to instruct a computer to perform a certain task
Flow-chart: a visual/graphical representation of an algorithm
Pseudo-code: an alternative approach to express an algorithm that bridges the gap between flow-chart and computer code
Rac 15
3. Programming Basics Comparison among the three:
Inputs1. Ask Inputs2 Perform
Inputs
Instruction 1
Condition?
Y
N
2. Perform Instruction 1 to inputs
3. If conditionsatisfied then store the result of instruction 1
4. If condition not satisfied then
DOInstruction 1IF condition THEN storeELSE instruction 1
ENDDO
29Rac 2010
storeredo instruction 1 with the input of the previous output from instruction 1
Flow-chart symbols (No | Meaning):
1  Process
2  Input / Output
3  Selection
4  Sub-program
5  Start/end terminal
6  Connector
7  Direction of process flow
8  Manual process
9  Page separator
10 Data storage
3.5 Programming Methodology
Planning (algorithm, flow-chart)
Writing the code
Debugging and testing the program
Making remarks / commenting
Storing and maintaining the program
3.6 Programming Strategy
The main program is designed top-down, with easily understood steps
Subprograms are designed as modules (modular), with the detailed programming arranged in a structured form
Both in main and sub-programs, it is suggested to use indentation as a realisation of the concept of structured programming
3.7 Logical Representation
Sequence
Unless directed otherwise, the computer code is implemented one instruction at a time:
Instruction 1 → Instruction 2 → Instruction 3
Selection
A means to split the program's flow into branches based on the outcome of a logical condition.
Repetition
A means to perform a certain task a number of times, until a certain condition is met.
3.8 Programming Language
C++; Visual Basic for Applications (VBA); Matlab/SciLab
* Programming Topics
Introduction & familiarisation
Matrices in Matlab
Mathematical operations
Graphic plotting
4. Errors
4.1 Introduction
Sources of error:
Computers: round-off, chopping
Numerical method: truncation
Human: modelling/formulation errors, data uncertainty, blunders
4.2 Definition of Error
Error is the discrepancy between the true value and the approximate value (in this case, generated from computational analysis).
(Figure: four targets, a–d, illustrating precision vs. accuracy.)
Precision: how close the measurement/computational results are to each other
Accuracy: how close the overall results are to the true value
Referring to the figure:
Figure a: precision LOW; accuracy LOW
Figure b: precision LOW; accuracy HIGH
Figure c: precision HIGH; accuracy LOW
Figure d: precision HIGH; accuracy HIGH
4.3 Formulation for Error
x_t : true value
x : approximate value
x_i : current approximate value
x_(i-1) : previous approximate value
e_t : true percentage relative error
e_a : approximate percentage relative error

e_t = (x_t - x) / x_t × 100%
e_a = (x_i - x_(i-1)) / x_i × 100%
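The two error definitions above can be sketched directly in code. This is an illustrative Python snippet (the course itself uses MATLAB); the example values, approximating sqrt(2), are hypothetical.

```python
def true_error(xt, x):
    """e_t = (x_t - x) / x_t * 100%"""
    return (xt - x) / xt * 100.0

def approx_error(xi, xi_prev):
    """e_a = (x_i - x_(i-1)) / x_i * 100%"""
    return (xi - xi_prev) / xi * 100.0

# Hypothetical example: approximating sqrt(2) = 1.41421356... by 1.4
print(true_error(2 ** 0.5, 1.4))      # about 1.005 %
print(approx_error(1.4142, 1.4))      # about 1.004 %
```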
4. Error: Round-off & Chopping
4.4 Numbers in Computer
Before discussing error due to the computer, it is better to discuss how numbers are represented in a computer.
Type of number represented by a computer: base-n, with n digits.
Example, base-10:
(8 × 10^4) + (6 × 10^3) + (4 × 10^2) + (0 × 10^1) + (9 × 10^0) = 86,409
Example, base-2:
101011_2 = (1×2^5)+(0×2^4)+(1×2^3)+(0×2^2)+(1×2^1)+(1×2^0)
         = (1×32)+(0×16)+(1×8)+(0×4)+(1×2)+(1×1)
         = 32 + 0 + 8 + 0 + 2 + 1 = 43_10
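The positional expansion above can be sketched as a small Python helper (illustrative; `from_digits` is a hypothetical name, not part of the course material):

```python
def from_digits(digits, base):
    """Evaluate a digit list (most significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left by one position, add digit
    return value

print(from_digits([8, 6, 4, 0, 9], 10))    # 86409
print(from_digits([1, 0, 1, 0, 1, 1], 2))  # 43
print(int("101011", 2))                    # 43, Python's built-in equivalent
```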
Floating point:
x = (±) m × b^e
(±) = sign; m = mantissa; b = base number; e = signed exponent
Example: +0.2345 × 10^-2 = 0.002345
Example of an 11-bit word:  1 | 0 | 1 1 0 | 1 1 1 0 0 1
(sign of number | sign of exponent | magnitude of exponent | magnitude of mantissa)
e = + (1×2^2 + 1×2^1 + 0×2^0) = +6
m = 1×2^5 + 1×2^4 + 1×2^3 + 0×2^2 + 0×2^1 + 1×2^0 = 57, with negative sign
b = 2, so the stored value is -57 × 2^6 = -3648
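Decoding that 11-bit word can be written out in a few lines of Python (an illustrative sketch of the slide's toy format, not of a real floating-point standard):

```python
# Bit layout from the slide: 1 | 0 | 110 | 111001
# sign of number | sign of exponent | exponent magnitude | mantissa magnitude
word = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1]

sign = -1 if word[0] else 1
exp_sign = -1 if word[1] else 1
exponent = exp_sign * int("".join(map(str, word[2:5])), 2)  # +6
mantissa = int("".join(map(str, word[5:])), 2)              # 57
value = sign * mantissa * 2 ** exponent
print(exponent, mantissa, value)  # 6 57 -3648
```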
4.5 Characteristics of Numbers in Computer
Limited range of quantities
Finite number of quantities within the range
Interval between numbers
These give rise to computer error: chopping and round-off.
4.6 Round-off and Chopping
3.141592654… → 3.1416 (round-off)
3.141592654… → 3.1415 (chopping)
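The two truncation styles on π can be reproduced in Python (illustrative; `chop` is a hypothetical helper, since Python has no built-in chopping function):

```python
import math

def chop(x, digits):
    """Keep `digits` decimal places by truncation (chopping)."""
    factor = 10 ** digits
    return math.trunc(x * factor) / factor

pi = 3.141592654
print(round(pi, 4))  # 3.1416 (round-off)
print(chop(pi, 4))   # 3.1415 (chopping)
```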
4. Error: Truncation
4.7 Taylor Series
Truncation error is related to how numerical methods work.
Review: functions/formulae are replaced by approximate relationships that can be solved with arithmetical/algebraic operations.
Example:
df(x_i)/dx ≈ f'(x_i) = (f(x_(i+1)) - f(x_i)) / (x_(i+1) - x_i)
One popular approach is the Taylor series:
f(x_(i+1)) = f(x_i) + f'(x_i) h + f''(x_i) h^2/2! + f'''(x_i) h^3/3! + …
The Taylor series expansion used in numerical analysis:
f(x_(i+1)) = f(x_i) + f'(x_i) h + f''(x_i) h^2/2! + … + f^(n)(x_i) h^n/n! + R_n
R_n = f^(n+1)(ξ) h^(n+1) / (n+1)!   (the remainder, after the nth term)
h = x_(i+1) - x_i : step size, constant for the numerical method (can be adaptive in more advanced numerical methods)
ξ : a value of x that lies between x_i and x_(i+1)
The original series has an infinite number of terms. In the application of numerical methods this is not possible, so the series is truncated.
The truncated series now contains an error that is none other than the remainder (R_n); it is now called the truncation error.
It is neither possible nor necessary to know the truncation error exactly, but it is sufficient to know that it is proportional to (step size)^(n+1):
R_n = O(h^(n+1))
The remainder itself has infinite terms:
f(x_(i+1)) = f(x_i) + f'(x_i) h + f''(x_i) h^2/2! + … + f^(m)(x_i) h^m/m! + … + f^(n)(x_i) h^n/n! + …
and is therefore truncated as well:
R_m = f^(m+1)(ξ) h^(m+1) / (m+1)!
m = finite number of terms; n = infinite number of terms; R_m = remainder after m terms
Example
f(x_(i+1)) = f(x_i) + f'(x_i) h + f''(x_i) h^2/2! + f'''(x_i) h^3/3! + … + f^(n)(x_i) h^n/n!
After truncation:
f(x_(i+1)) = f(x_i) + f'(x_i) h + R_1   (first-order approximation)
f'(x_i) = (f(x_(i+1)) - f(x_i)) / h - R_1/h   (first derivative, approximated; R_1/h is the truncation error)
4.8 Numerical Differentiation (finite divided differences)
Forward difference approximation of the first derivative:
f'(x_i) = (f(x_(i+1)) - f(x_i)) / (x_(i+1) - x_i) + O(x_(i+1) - x_i) = Δf_i / h + O(h)
Backward difference approximation of the first derivative:
f'(x_i) = (f(x_i) - f(x_(i-1))) / h + O(h) = ∇f_i / h + O(h)
Derivation: Eqns. 4.19 and 4.20
Centred difference approximation of the first derivative:
f'(x_i) = (f(x_(i+1)) - f(x_(i-1))) / (2h) + O(h^2)
More accurate: the error is O(h^2) rather than O(h).
Derivation: Eqns. 4.21 and 4.22
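The three first-derivative formulas can be compared numerically. A minimal Python sketch (illustrative; the test function sin and the point x = 0.5 are hypothetical choices, not from the slides):

```python
import math

def forward(f, x, h):   return (f(x + h) - f(x)) / h           # O(h)
def backward(f, x, h):  return (f(x) - f(x - h)) / h           # O(h)
def centered(f, x, h):  return (f(x + h) - f(x - h)) / (2*h)   # O(h^2)

f, x, h = math.sin, 0.5, 0.1
true = math.cos(0.5)
for approx in (forward, backward, centered):
    print(approx.__name__, abs(approx(f, x, h) - true))
# The centred error is much smaller; halving h roughly halves the
# forward/backward error but quarters the centred one.
```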
Finite difference approximation of higher derivatives, e.g. from
f(x_(i+2)) = f(x_i) + f'(x_i)(2h) + f''(x_i)(2h)^2/2! + …
it follows that:
f''(x_i) = (f(x_(i+2)) - 2 f(x_(i+1)) + f(x_i)) / h^2 + O(h)   (forward difference)
f''(x_i) = (f(x_(i+1)) - 2 f(x_i) + f(x_(i-1))) / h^2 + O(h^2)   (centred difference)
More detail: Fig. 23.3
4.9 Error Propagation
Error can propagate through mathematical functions.
For a function of a single variable, the estimated error of the function, due to the error Δx̃ of the variable x, is:
Δf(x̃) ≈ |f'(x̃)| Δx̃
For multivariable functions:
Δf(x̃_1, x̃_2, …, x̃_n) ≈ |∂f/∂x_1| Δx̃_1 + |∂f/∂x_2| Δx̃_2 + … + |∂f/∂x_n| Δx̃_n
Example 4.6
4.10 Stability and Condition
Another application of error propagation in numerical methods is stability.
A computation is numerically unstable if the effect of errors in the input values is grossly magnified by the numerical method.
The quantity that represents this is the condition number:
Condition number = x̃ f'(x̃) / f(x̃)
Condition number ≈ 1: the relative error in the function is identical to that of the variable
Condition number > 1: the relative error of the variable is amplified
Condition number < 1: the relative error of the variable is attenuated
Table 4.3 & Example 4.7
4.11 Numerical Error
The total numerical error is the summation of the truncation and round-off errors.
There is no systematic and general approach, but here are a number of practical programming guidelines:
Use extended-precision arithmetic
Avoid subtracting two nearly equal numbers, by rearranging or reformulating the problem
Predict the numerical error
Verify/validate against known (theoretical/empirical) results
Tune some parameters, such as step size, weighting factors, coefficients, etc.
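The "avoid subtracting nearly equal numbers" guideline can be demonstrated concretely. A hypothetical Python example (the expression and the value 1e16 are illustrative choices, not from the slides): the two forms below are algebraically identical, yet the naive one loses all significance.

```python
import math

x = 1e16
naive = math.sqrt(x + 1) - math.sqrt(x)             # subtracts nearly equal numbers
rearranged = 1 / (math.sqrt(x + 1) + math.sqrt(x))  # algebraically identical form
print(naive, rearranged)  # 0.0 vs about 5e-9
```

In double precision, x + 1 rounds back to x, so the naive difference cancels to exactly zero, while the rearranged form retains full accuracy.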
4.12 Human Errors
Assumption/formulation error
Data uncertainty/error
Blunder
4.13 Characteristics of Numerical Methods
Number of initial guesses
Rate of convergence
Stability
Accuracy & precision
Breadth of application
Special requirements
Programming effort required
5. Roots of Equations
5.1 Introduction
(Figure: a curve F(x) crossing the x-axis; the roots x_1 and x_2 are the values of x where F(x) = 0.)
5.2 Application
Fundamental principle | Dependent variable | Independent variables | Parameters
Heat balance | Temperature | Time and position | Thermal properties and geometry
Mass balance | Concentration or mass quantity | Time and position | Chemical behaviour, mass-transfer coefficients, geometry
Force balance | Magnitude and direction of forces | Time and position | Strength, structural properties, geometry
Energy balance | Changes in kinetic/potential energy states | Time and position | Thermal properties, mass, geometry
Newton's laws of motion | Acceleration, velocity, or position | Time and position | Mass, spring, damper, geometry
Kirchhoff's laws | Current and voltage | Time | Electrical properties
5.3 Mathematical Background
Algebraic function:
f_n y^n + f_(n-1) y^(n-1) + … + f_1 y + f_0 = 0
Polynomials as a simpler form of algebraic function:
f(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n
Transcendental functions: logarithmic, exponential, trigonometric, etc.
Standard methods for locating the roots of equations:
Determination of the real roots of algebraic and transcendental equations: a single root, from foreknowledge of its approximate location
Determination of all real and complex roots of polynomials
5.4 Methods
Bracketing methods: (Graphical); Bisection; False-position
Open methods: Simple fixed-point iteration; Newton-Raphson; Secant
Issues: Algorithm; Convergence (termination criteria, error estimates); Pitfalls
5.5 Bracketing Method
Locating the root from the change of sign, by guessing. The guesses are set within a range covering the root itself.
• The sign change appears between f(x_l) and f(x_u)
• Requires an algorithm to predict x_r
5.5.1 (Graphical method)
A good visualisation of the bracketing method.
Used as a rough estimate, to serve as the initial guess for the "real numerical method".
Example 5.1
5.5.2 Bisection method
Same concept as the graphical method, except now done in a systematic way.
Test for a sign change: f(x_l) · f(x_u) < 0
The next prediction is the mid-point between the upper and lower bounds:
x_r = (x_l + x_u) / 2
The rest of the algorithm is in Fig. 5.5. The basic pseudocode can be found in Fig. 5.10, with the modification as in Fig. 5.11.
(Figure: successive halving of the bracket: [x_l, x_u] gives x_1, then [x_1, x_u] gives x_2, and so on to x_3, x_4.)
Algorithm for Bisection
Effect of the improvement on programming (Fig. 5.10 vs Fig. 5.11):

Basic version (Fig. 5.10):
FUNCTION Bisection(input)
  iter = 0
  DO
    xrold = xr
    xr = (xl + xu)/2
    iter = iter + 1
    IF xr ~= 0 THEN ea = ABS((xr - xrold)/xr)*100%
    test = f(xl)*f(xr)
    IF test < 0 THEN
      xu = xr
    ELSEIF test > 0 THEN
      xl = xr
    ELSE
      ea = 0
    END
    IF ea < es OR iter >= imax THEN EXIT
  ENDDO
  Bisection = xr
END

Improved version (Fig. 5.11): f(xl) is evaluated once, stored as fl and reused, halving the number of function evaluations:
FUNCTION Bisection(input)
  iter = 0
  fl = f(xl)
  DO
    xrold = xr
    xr = (xl + xu)/2
    fr = f(xr)
    iter = iter + 1
    IF xr ~= 0 THEN ea = ABS((xr - xrold)/xr)*100%
    test = fl*fr
    IF test < 0 THEN
      xu = xr
    ELSEIF test > 0 THEN
      xl = xr
      fl = fr
    ELSE
      ea = 0
    END
    IF ea < es OR iter >= imax THEN EXIT
  ENDDO
  Bisection = xr
END
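The improved pseudocode above can be sketched as runnable Python (illustrative; the course uses MATLAB, and the test polynomial with root x = 4 is a hypothetical example):

```python
def bisect(f, xl, xu, es=1e-7, imax=100):
    """Bisection in the style of the Fig. 5.11 pseudocode (es in percent)."""
    if f(xl) * f(xu) > 0:
        raise ValueError("root is not bracketed")
    fl = f(xl)
    xr = xl
    for _ in range(imax):
        xrold = xr
        xr = (xl + xu) / 2
        fr = f(xr)
        ea = abs((xr - xrold) / xr) * 100 if xr != 0 else 0.0
        test = fl * fr
        if test < 0:
            xu = xr
        elif test > 0:
            xl = xr
            fl = fr          # reuse f(xl) without re-evaluating it
        else:
            ea = 0.0         # landed exactly on the root
        if ea < es:
            break
    return xr

# hypothetical example: x^3 - 13x - 12 has a root at x = 4 in [3.5, 4.5]
print(bisect(lambda x: x**3 - 13*x - 12, 3.5, 4.5))  # ≈ 4.0
```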
5.5.3 False-position method
Also called Regula Falsi or, in a more 'intelligent' way, the linear interpolation method.
The prediction is linearly proportional, and expressed as:
x_r = x_u - f(x_u) (x_l - x_u) / (f(x_l) - f(x_u))
The termination criterion:
e_a = |(x_r^i - x_r^(i-1)) / x_r^i| × 100% ≤ e_s
Pitfalls of false-position: for strongly curved functions one of the bounds can stagnate, slowing convergence.
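The false-position formula gives a routine almost identical to bisection, with the midpoint replaced by the linear estimate. An illustrative Python sketch (same hypothetical polynomial as before):

```python
def false_position(f, xl, xu, es=1e-7, imax=100):
    """Regula falsi: linear-interpolation estimate of the root (es in percent)."""
    xr = xl
    for _ in range(imax):
        xrold = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))  # linear estimate
        ea = abs((xr - xrold) / xr) * 100 if xr != 0 else 0.0
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        if ea < es:
            break
    return xr

print(false_position(lambda x: x**3 - 13*x - 12, 3.5, 4.5))  # ≈ 4.0
```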
5.6 Open Methods
Require only 1 (one) initial guess
Converge faster than the bracketing methods
However, they sometimes do not converge (they diverge), namely when a poor initial guess is selected.
5.6.1 Simple fixed-point iteration
Requires a re-formulation of the problem: from f(x) = 0 into x = g(x).
In numerical form:
x_(i+1) = g(x_i)
Example: f(x) = e^(-x) - x = 0 gives x = e^(-x), so in numerical form:
x_(i+1) = e^(-x_i)
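The example iteration x_(i+1) = e^(-x_i) converges in a few lines of Python (illustrative sketch; the starting guess 0 is a hypothetical choice):

```python
import math

x = 0.0  # initial guess
for _ in range(50):
    x = math.exp(-x)  # x_{i+1} = g(x_i) = e^{-x_i}
print(x)  # converges to 0.56714..., the root of e^{-x} - x = 0
```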
Graphical representation: the two-curve graphical method.
Convergence (Box 6.1): the error is linearly proportional, E_(i+1) = g'(ξ) E_i, so:
|g'(x)| < 1 : convergent
|g'(x)| > 1 : divergent
g'(x) > 0 : monotonic error
g'(x) < 0 : oscillating error
5.6.2 The Newton-Raphson Method
Arguably the most widely used root-locating method.
It uses a first-derivative relation to form a numerical algorithm:
x_(i+1) = x_i - f(x_i) / f'(x_i)
It requires a pre-determined (explicit) first derivative of the function.
It lacks a general convergence criterion.
The accuracy is predicted as quadratic convergence:
E_(i+1) = O(E_i^2)
Derivation: Fig. 6.5 & Eq. 6.5 – 6.6
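The Newton-Raphson formula translates directly into code. A minimal Python sketch (illustrative; computing sqrt(2) as the root of x^2 - 2 is a hypothetical example):

```python
def newton(f, fprime, x, imax=50, tol=1e-12):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    for _ in range(imax):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:   # step size as a simple stopping criterion
            break
    return x

# hypothetical example: sqrt(2) as the root of x^2 - 2
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))  # 1.41421356...
```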
5.6.3 The Secant Method
A modification of the N-R method, with the first derivative approximated numerically using a backward finite difference.
The numerical form is, therefore:
x_(i+1) = x_i - f(x_i) (x_(i-1) - x_i) / (f(x_(i-1)) - f(x_i))
Or, with a pre-determined step size δ (the modified secant method):
x_(i+1) = x_i - δ x_i f(x_i) / (f(x_i + δ x_i) - f(x_i))
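The secant update needs two starting values but no derivative. An illustrative Python sketch (same hypothetical sqrt(2) example as for Newton-Raphson):

```python
def secant(f, x0, x1, imax=50, tol=1e-12):
    """Secant method: Newton-Raphson with a backward finite-difference slope."""
    for _ in range(imax):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:        # flat secant line; cannot proceed
            break
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ≈ 1.41421356
```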
5.6.4 Convergence
(Figure: convergence comparison of the methods.)
5.6.5 Multiple Roots
Some problems arise in finding multiple roots of an equation. Examples:
a) f(x) = (x - 3)(x - 1)(x - 1) = x^3 - 5x^2 + 7x - 3; roots: 3; 1 (double root)
b) f(x) = (x - 3)(x - 1)(x - 1)(x - 1) = x^4 - 6x^3 + 12x^2 - 10x + 3; roots: 3; 1 (triple root)
Bracketing methods: there is no change of sign at an even-multiplicity root
Newton-Raphson and secant methods: both f(x) and f'(x) tend to 0 at the root
Solution: the Ralston & Rabinowitz method
First modification of N-R:
x_(i+1) = x_i - m f(x_i) / f'(x_i)
requires foreknowledge of m, i.e. the multiplicity of the root.
Second modification of N-R, Ralston-Rabinowitz (Eq. 6.13):
x_(i+1) = x_i - f(x_i) f'(x_i) / ( [f'(x_i)]^2 - f(x_i) f''(x_i) )
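The Ralston-Rabinowitz formula can be tried on the double-root example (a) above. An illustrative Python sketch (the starting guess 0 and the guard conditions are hypothetical choices):

```python
def f(x):   return x**3 - 5*x**2 + 7*x - 3   # (x - 3)(x - 1)^2, double root at 1
def fp(x):  return 3*x**2 - 10*x + 7
def fpp(x): return 6*x - 10

x = 0.0
for _ in range(20):
    # modified Newton-Raphson (Ralston & Rabinowitz, Eq. 6.13)
    den = fp(x)**2 - f(x) * fpp(x)
    if den == 0:
        break
    step = f(x) * fp(x) / den
    x -= step
    if abs(step) < 1e-9:
        break
print(x)  # ≈ 1.0 despite the double root, where plain N-R slows to linear
```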
6. Roots of Polynomials
6.1 Introduction
Example of a polynomial:
f(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n
Example of a polynomial in engineering, the free-vibration problem:
m x'' + c x' + k x = 0, with x = e^(rt), gives (m r^2 + c r + k) e^(rt) = 0
r_1 and r_2 = ??? A root-locating problem or, more specifically, an eigenvalue problem.
6.2 Conventional Methods
Numerical methods are normally used to locate the complex roots of polynomials.
As numerical methods using initial guesses:
Bracketing methods: slow convergence
Open methods: possibility of non-convergence (divergence)
In the case of complex roots of polynomials:
Bracketing methods: not applicable (a change of sign of a complex number??)
Newton-Raphson: more suitable (with a programming language capable of handling complex numbers)
6.2.1 Muller's method
Recall the secant method: a straight-line projection through two points down to the x-axis.
Muller's method uses a parabolic projection (a higher-order approximation):
f(x) = a (x - x_2)^2 + b (x - x_2) + c
If the linear approximation requires 2 points (backward finite difference), the parabolic Muller's method requires 3 points: [x_0, f(x_0)], [x_1, f(x_1)], [x_2, f(x_2)].
Algorithm:
Initial guesses: x_0, x_1, x_2
Step sizes and divided differences:
h_0 = x_1 - x_0 ; h_1 = x_2 - x_1
δ_0 = (f(x_1) - f(x_0)) / h_0 ; δ_1 = (f(x_2) - f(x_1)) / h_1
Parabola coefficients a, b, c:
a = (δ_1 - δ_0) / (h_1 + h_0) ; b = a h_1 + δ_1 ; c = f(x_2)
Root estimator:
x_3 = x_2 + (-2c) / (b ± sqrt(b^2 - 4ac))
Output a new set of parabola points, and the error:
e_a = |(x_3 - x_2) / x_3| × 100%
Repeat until the error is minimised.
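The algorithm above can be sketched in Python; using `cmath` lets complex roots emerge naturally from the square root. An illustrative sketch (the cubic x^3 - 13x - 12, with roots 4, -1, -3, and the guesses 4.5, 5.5, 5 are a hypothetical example):

```python
import cmath

def muller(f, x0, x1, x2, imax=50, tol=1e-10):
    """Muller's method: parabolic projection through three points."""
    x3 = x2
    for _ in range(imax):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        rad = cmath.sqrt(b * b - 4 * a * c)
        # choose the sign that maximises |denominator| (avoids cancellation)
        den = b + rad if abs(b + rad) > abs(b - rad) else b - rad
        x3 = x2 - 2 * c / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3   # sequential point replacement
    return x3

print(muller(lambda x: x**3 - 13*x - 12, 4.5, 5.5, 5.0))  # ≈ 4
```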
The algorithm repeats with an ever-changing set x_0, x_1, x_2 until the approximate error e_a reaches the stopping criterion e_s.
For the result x_3 of each iteration, the selection of the new set of points is governed by two general strategies:
If only real roots of x_3 are located, the two original points nearest to x_3 are chosen.
If both real and complex roots are found, a sequential approach is used: x_0 = x_1, x_1 = x_2, x_2 = x_3.
6.2.2 Bairstow Method
A rather different method from the previous ones, based on factorisation of polynomials:
f_n(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n
Division by a monomial term (x - t) yields:
f_(n-1)(x) = b_1 + b_2 x + … + b_n x^(n-1)
Or, division by a quadratic term (x^2 - r x - s).
Bairstow algorithm (quadratic factor):
1. Choose initial guesses r and s.
2. Calculate the b_i's:
b_n = a_n
b_(n-1) = a_(n-1) + r b_n
b_i = a_i + r b_(i+1) + s b_(i+2), for i = n-2 down to 0
3. Calculate the c_i's:
c_n = b_n
c_(n-1) = b_(n-1) + r c_n
c_i = b_i + r c_(i+1) + s c_(i+2), for i = n-2 down to 1
4. Solve the linear algebraic equations for Δr and Δs, and obtain the new r and s.
5. Check the error; proceed if not yet sufficient.
6. Find the roots of x^2 - r x - s = 0.
7. Linear Algebraic Equations
7.1 Introduction
a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2
…
a_n1 x_1 + a_n2 x_2 + … + a_nn x_n = b_n
In matrix form [A]{x} = {b}, often handled as the augmented matrix [A | b].
7.2 Gauss Elimination
There are two steps: forward elimination and back substitution.
In the first step, a pivot equation is selected, with a_kk as the pivot coefficient, to find the new coefficients:
a'_ij = a_ij - (a_ik / a_kk) a_kj   (see the pseudocode for details)
Once completed, the result is a set of simple, explicit relations, solved by back substitution:
x_n = b_n / a_nn
x_i = ( b_i - Σ_(j=i+1..n) a_ij x_j ) / a_ii, for i = n-1, n-2, …, 1
7.2.1 Pseudocode

Forward elimination (turns the augmented matrix into an upper-triangular one):
DO k = 1, n-1
  DO i = k+1, n
    factor = a(i,k) / a(k,k)
    DO j = k+1, n
      a(i,j) = a(i,j) - factor * a(k,j)
    ENDDO
    b(i) = b(i) - factor * b(k)
  ENDDO
ENDDO

Back substitution:
x(n) = b(n) / a(n,n)
DO i = n-1, 1, -1
  sum = 0
  DO j = i+1, n
    sum = sum + a(i,j) * x(j)
  ENDDO
  x(i) = (b(i) - sum) / a(i,i)
ENDDO

For a 3×3 system this gives x_3 = b_3''/a_33'', x_2 = (b_2' - a_23' x_3)/a_22', x_1 = (b_1 - a_12 x_2 - a_13 x_3)/a_11.
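The pseudocode above translates directly into Python. An illustrative sketch without pivoting (the 3×3 system used here is the textbook-style example solved later in the LU section):

```python
def gauss(A, b):
    """Naive Gauss elimination (no pivoting), mirroring the pseudocode above."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):      # forward elimination
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    x = [0.0] * n               # back substitution
    x[n - 1] = b[n - 1] / A[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss(A, b))  # ≈ [3.0, -2.5, 7.00003]
```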
7.2.2 Pitfalls of Gauss Elimination
Division by 0 (pivot coefficient)
Round-off error
Ill-conditioning (Fig. 9.2), when the matrix determinant ≈ 0
Matrix determinant = 0: singular, infinitely many solutions
7.2.3 Improvements
More significant figures, to resolve the round-off error problem
Pivoting strategy, to avoid division by zero (Example 9.9): the equation with the largest coefficient becomes the pivot equation
Scaling, to resolve the round-off error problem (Example 9.10): modifying the order of magnitude of one or more equations so that all equations have a roughly similar order of magnitude
Ill-conditioned problems are much more difficult and require advanced treatment. However, most real engineering problems are normally well-conditioned, so if you find an ill-conditioned problem, check and recheck the physical formulation.
7.3 Gauss-Jordan
A modification of Gauss elimination.
The first step is modified so that each unknown is eliminated from all other equations, rather than just the subsequent ones. This results in an identity matrix.
Hence, there is no need for back substitution.
But the elimination step requires even more work, resulting in more computation than the Gauss method in total.
Algorithm for Gauss-Jordan:
Normalise the first row by the coefficient a_11
Using the normalised first row, eliminate the first column of each of the remaining rows (a_i1)
Normalise a'_22
Eliminate a'_12 and a'_32
And so on, until the last column and row; the forward elimination process leaves the identity matrix with the solution in the right-hand column:
[ 1 0 0 | b_1'' ; 0 1 0 | b_2'' ; 0 0 1 | b_3'' ]
Comparison with Gauss elimination:
Conceptually simpler (no apparent "back substitution" process)
All pitfalls of Gauss elimination remain
Relatively more computation (approx. 50% more)
Still used in some numerical methods.
* Favorite Problems
9.7; 9.9; P9.14
7.4 LU Decomposition
The forward elimination stage, both in Gauss and Gauss-Jordan, is a time-consuming process.
With LU decomposition this stage is simplified by not computing the right-hand side of the equations:
[A] = [L][U]
Upper matrix: [ u_11 u_12 u_13 ; 0 u_22 u_23 ; 0 0 u_33 ]
Lower matrix: [ 1 0 0 ; l_21 1 0 ; l_31 l_32 1 ]
This is particularly beneficial if we have the same system with the same [A] but different sets of {b}. Example: in an FE model, we have the same shape of structure but different force systems.
Description of the process:
[A] = [ a_11 a_12 a_13 ; a_21 a_22 a_23 ; a_31 a_32 a_33 ] = [L][U]
Forward elimination yields the upper-triangular matrix:
[U] = [ a_11 a_12 a_13 ; 0 a'_22 a'_23 ; 0 0 a''_33 ]
The elimination factors form the lower-triangular matrix:
[L] = [ 1 0 0 ; f_21 1 0 ; f_31 f_32 1 ]
Check: [L][U] = [A].
Programming strategy:
The Decompose process forms a matrix [A'] of the following form, so that the computer stores only one matrix, used according to the procedure as either [U] or [L]:
[A'] = [ u_11 u_12 … u_1n ; l_21 u_22 … u_2n ; … ; l_n1 l_n2 … u_nn ]

SUB Decompose(a, n)
  DO k = 1, n-1
    DO i = k+1, n
      factor = a(i,k) / a(k,k)
      a(i,k) = factor
      DO j = k+1, n
        a(i,j) = a(i,j) - factor * a(k,j)
      ENDDO
    ENDDO
  ENDDO
END Decompose
Basic algorithm:
During the elimination phase, the factors that are calculated are stored as l_ij
A partial pivoting strategy is implemented, instead of swapping the whole row
While the equations are not scaled, scaling is used to determine whether pivoting is to be implemented
The diagonal terms are monitored for near-zero occurrences in order to raise a singularity warning
The substitution process obtains {D} and finally {X}, using the following relationships:
Forward substitution, [L]{D} = {B}:
d_1 = b_1
d_i = b_i - Σ_(j=1..i-1) l_ij d_j, for i = 2, 3, …, n
Back substitution, [U]{X} = {D}:
x_n = d_n / u_nn
x_i = ( d_i - Σ_(j=i+1..n) u_ij x_j ) / u_ii, for i = n-1, n-2, …, 1
Example 10.1 – 10.2
[A] = [ 3 -0.1 -0.2 ; 0.1 7 -0.3 ; 0.3 -0.2 10 ]
Factors: f_21 = 0.1/3 = 0.0333333 ; f_31 = 0.3/3 = 0.1000000 ; f_32 = -0.19/7.00333 = -0.0271300
[U] = [ 3 -0.1 -0.2 ; 0 7.00333 -0.293333 ; 0 0 10.0120 ]
[L] = [ 1 0 0 ; 0.0333333 1 0 ; 0.1000000 -0.0271300 1 ]
Forward substitution, [L]{D} = {B} with {B} = {7.85, -19.3, 71.4}^T:
{D} = {7.85, -19.5617, 70.0843}^T
Back substitution, [U]{X} = {D}:
{X} = {3, -2.5, 7.00003}^T
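The worked example can be reproduced with a short Doolittle-style implementation. An illustrative Python sketch, following the compacted-storage strategy above; the second solve shows the main benefit of LU, reusing the factors for a new {b}:

```python
def lu_decompose(A):
    """Doolittle LU: factors stored compactly, as in [A'] above."""
    n = len(A)
    A = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            A[i][k] = factor            # l_ik kept below the diagonal
            for j in range(k + 1, n):
                A[i][j] -= factor * A[k][j]
    return A

def lu_solve(LU, b):
    n = len(b)
    d = b[:]
    for i in range(1, n):               # forward substitution: [L]{d} = {b}
        d[i] -= sum(LU[i][j] * d[j] for j in range(i))
    x = [0.0] * n                       # back substitution: [U]{x} = {d}
    for i in range(n - 1, -1, -1):
        s = sum(LU[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / LU[i][i]
    return x

LU = lu_decompose([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
print(lu_solve(LU, [7.85, -19.3, 71.4]))  # ≈ [3.0, -2.5, 7.00003]
print(lu_solve(LU, [1.0, 0.0, 0.0]))      # reuse the factors for a new {b}
```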
7.5 Crout Decomposition
Here the roles of the triangular matrices are swapped: [U] is unit upper triangular and [L] carries the diagonal, with the first column of [L] equal to that of [A] (l_i1 = a_i1):
[U] = [ 1 u_12 u_13 ; 0 1 u_23 ; 0 0 1 ]
[L] = [ l_11 0 0 ; l_21 l_22 0 ; l_31 l_32 l_33 ]
With comparable computational effort, Crout decomposition is conceptually simpler than LU (Doolittle) decomposition.
With a similar strategy, the [L] and [U] matrices can be compacted into [A'] to save memory.
Moreover, since the original [A] matrix is never used for further calculation, [A'] can replace [A], so that only one matrix is stored, again saving memory.
Numerical relationships:
l_i1 = a_i1, for i = 1, 2, …, n
u_1j = a_1j / l_11, for j = 2, 3, …, n
For j = 2, 3, …, n-1:
  l_ij = a_ij - Σ_(k=1..j-1) l_ik u_kj, for i = j, j+1, …, n
  u_jk = ( a_jk - Σ_(i=1..j-1) l_ji u_ik ) / l_jj, for k = j+1, j+2, …, n
l_nn = a_nn - Σ_(k=1..n-1) l_nk u_kn
7.6 Matrix Inverse
[A][A]^-1 = [A]^-1[A] = [I]
The inverse is computed column by column. Writing [I] = [ {i_1} {i_2} {i_3} ] with the unit vectors
{i_1} = {1, 0, 0}^T, {i_2} = {0, 1, 0}^T, {i_3} = {0, 0, 1}^T,
each column {a'_j} of [A]^-1 solves
[A]{a'_1} = {i_1} ; [A]{a'_2} = {i_2} ; [A]{a'_3} = {i_3}
so an LU decomposition of [A] can be reused for every column.
7.7 Error Analysis & System Condition
An application of the inverse matrix to detect an ill-conditioned matrix:
Scale the matrix [A]; invert it; compare the order of magnitude of the elements of [A]^-1 with those of [A]
Check whether [A][A]^-1 ≈ [I]; if not, the matrix is ill-conditioned
Check whether ([A]^-1)^-1 ≈ [A]; if not, the matrix is ill-conditioned
Matrix norms:
Euclidean norm, for vectors
Frobenius norm, for general matrices
Uniform vector norm
Row-sum norm
Matrix condition number:
Cond[A] = ||A|| · ||A^-1||
Application of Cond[A], an error estimate for {X}:
||ΔX|| / ||X|| ≤ Cond[A] · ||ΔA|| / ||A||
7.7 Special Matrices
Special (square) matrices:
Diagonal systems
Banded systems (e.g. tridiagonal), Fig. 11.1
They are often found in finite element problems, and require solving with a minimum of computational effort.
The Thomas algorithm: an efficient algorithm for such matrices.
Matrix transformation: store only the three diagonals (sub-diagonal e, main diagonal f, super-diagonal g), so that [A]{x} = {r} becomes, e.g. for n = 4:
[ f_1 g_1 ; e_2 f_2 g_2 ; e_3 f_3 g_3 ; e_4 f_4 ] {x} = {r}
Decomposition:
e_k = e_k / f_(k-1) ; f_k = f_k - e_k g_(k-1), for k = 2, 3, …, n
Forward substitution:
r_k = r_k - e_k r_(k-1), for k = 2, 3, …, n
Back substitution:
x_n = r_n / f_n
x_k = ( r_k - g_k x_(k+1) ) / f_k, for k = n-1, n-2, …, 1
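The three sweeps above fit in a few lines of Python. An illustrative sketch; the 4×4 system is a hypothetical example in the style of a heated-rod problem (note it is diagonally dominant, so no pivoting is needed):

```python
def thomas(e, f, g, r):
    """Tridiagonal solver; e, f, g are the sub-, main and super-diagonals."""
    n = len(f)
    e, f, g, r = e[:], f[:], g[:], r[:]
    for k in range(1, n):          # decomposition + forward substitution
        m = e[k] / f[k - 1]
        f[k] -= m * g[k - 1]
        r[k] -= m * r[k - 1]
    x = [0.0] * n                  # back substitution
    x[n - 1] = r[n - 1] / f[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

# hypothetical example: a 4x4 tridiagonal system
e = [0.0, -1.0, -1.0, -1.0]
f = [2.04, 2.04, 2.04, 2.04]
g = [-1.0, -1.0, -1.0, 0.0]
r = [40.8, 0.8, 0.8, 200.8]
print(thomas(e, f, g, r))
```

Only O(n) operations and 4 vectors of storage are needed, versus O(n^3) and a full matrix for Gauss elimination.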
7. Linear Algebraic Equations
 Symmetric matrices: a_ij = a_ji or, equivalently, [A] = [A]^T.
 Therefore, only half the storage is required.
7. Linear Algebraic Equations
7.8 Iterative Approach
 Up to now, the solution methods for linear algebraic equations have
 been based on a deterministic (direct) approach.
 They are accurate (apart from round-off error and error propagation)
 and reasonably efficient for problems with small matrices.
 However, for large matrices they are no longer efficient.
 The Gauss-Seidel and Jacobi methods use an iterative approach,
 requiring a set of initial guesses.
7. Linear Algebraic Equations
 The approach starts from the fundamentals of the linear algebraic
 equations [A]{x} = {b}, with initial guesses x1, x2, ..., xn.
 Each equation is solved for the unknown on its diagonal:
   x_1 = (b_1 - a_12 x_2 - a_13 x_3 - ... - a_1n x_n) / a_11
   x_2 = (b_2 - a_21 x_1 - a_23 x_3 - ... - a_2n x_n) / a_22
   ...
   x_n = (b_n - a_n1 x_1 - a_n2 x_2 - ... - a_{n,n-1} x_{n-1}) / a_nn
 Evaluating these expressions in turn produces the first iteration,
 and the process is repeated.
7. Linear Algebraic Equations
 (Figure: iteration schemes of the Gauss-Seidel and Jacobi methods.)
7. Linear Algebraic Equations
 The difference between Gauss-Seidel and Jacobi shows up from the
 first iteration (Fig. 11.4): Gauss-Seidel uses each newly computed
 value immediately, whereas Jacobi updates all values only at the end
 of an iteration.
 Stopping criterion:
   eps_a,i = |(x_i^j - x_i^{j-1}) / x_i^j| · 100% < eps_s
 As always in an iterative approach, a problem of divergence may
 appear.
 Convergence criterion (diagonal dominance):
   |a_ii| > sum_{j=1, j≠i}^{n} |a_ij|
 An improvement technique for Gauss-Seidel is relaxation:
   x_i^new = lambda · x_i^new + (1 - lambda) · x_i^old
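A sketch of Gauss-Seidel with relaxation as described above; the diagonally dominant test system is my own choice, so convergence is guaranteed:

```python
# Gauss-Seidel iteration with relaxation factor lam (lam = 1 is plain
# Gauss-Seidel). Stops when the largest relative change is below tol.

def gauss_seidel(A, b, lam=1.0, tol=1e-8, max_it=200):
    n = len(b)
    x = [0.0] * n                        # initial guesses
    for _ in range(max_it):
        max_err = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]              # new estimate
            x_new = lam * x_new + (1.0 - lam) * x[i]  # relaxation
            if x_new != 0.0:
                max_err = max(max_err, abs((x_new - x[i]) / x_new))
            x[i] = x_new                 # used immediately (Gauss-Seidel)
        if max_err < tol:
            break
    return x

# Diagonally dominant system with exact solution (3, -2.5, 7)
A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel(A, b)
```

For Jacobi, one would compute all `x_new` values from the previous iterate and assign them only after the inner loop finishes.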
(Flow chart: main Gauss elimination routine. Inputs a, b, n, x, tol, er;
scale factors S(i) = |a(i,1)|, updated to max |a(i,j)| over j = 2, ..., n;
calls Eliminate, then Substitute if er ≠ -1.)

(Flow chart: back-substitution routine. x_n = b_n / a_n,n, then for
i = n-1, ..., 1: x_i = (b_i - sum) / a_i,i with sum = Σ a(i,j)·x(j).)
8. OPTIMISATION
8.1 Introduction
 Essentially, similar to root location, the optimisation process
 seeks a point on a function.
 The difference:
   Root location finds a root, where the function is zero:
     f(x) = 0
   Optimisation finds the point where the function takes its minimum
   or maximum value:
     f'(x) = 0
 Why minimise/maximise?
   Minimising weight of material
   Minimising cost
   Maximising performance
   ...
8. OPTIMISATION
 Example: Cantilever Beam
 Optimisation problem: design a minimum-weight cantilever beam
 (I-beam profile) that meets failure and deflection criteria.
 Design variables:   b, h, t
 Design parameters:  E, , Sy, L, F
 Objective function: A·L
8. OPTIMISATION
 General components of optimisation problems:
   Max/Min: f(x_1, x_2, ..., x_n)             objective function
   Subject to:
     h_1(x_1, x_2, ..., x_n) = 0
     h_2(x_1, x_2, ..., x_n) = 0              equality constraints
     ...
     h_l(x_1, x_2, ..., x_n) = 0
     g_1(x_1, x_2, ..., x_n) ≤ 0
     g_2(x_1, x_2, ..., x_n) ≤ 0              inequality constraints
     ...
     g_m(x_1, x_2, ..., x_n) ≤ 0
     x_i^l ≤ x_i ≤ x_i^u, i = 1, 2, ..., n    bound constraints
   x_1, x_2, ..., x_n                         design variables
8. OPTIMISATION
Classification:
 The presence of constraints:
   Constrained optimisation
   Unconstrained optimisation
 Number of design variables:
   One-dimensional
   Multi-dimensional
 Nature of objective function & constraints (Fig. PT 4.4):
   Linear programming (objective function and constraints are linear)
   Quadratic programming (objective function is quadratic and
   constraints are linear)
   Non-linear programming (objective function and constraints are
   nonlinear)
8. OPTIMISATION
8.2 Application
 (Figures: application examples of optimisation.)
8. OPTIMISATION
8.3 One Dimensional
 Recall: bisection & false-position
 Golden-Section Search
   The interval is divided so that
     l_0 = l_1 + l_2,   l_1 / l_0 = l_2 / l_1
   which gives the golden ratio
     R = (sqrt(5) - 1) / 2 ≈ 0.618,   d = R (x_u - x_l)
 Quadratic Interpolation
   The curve around the extreme is approximated by a quadratic curve,
   f_2(x_0, x_1, x_2, f(x_0), f(x_1), f(x_2)).
   Find the maximum of f_2 to obtain x_3.
   Repeat the process with the new data point (x_3, f(x_3)), ...
8. OPTIMISATION
8.3 One Dimensional
 Golden-section algorithm:
   Initial guesses x_u and x_l
   Find the golden-section distance:
     d = (sqrt(5) - 1) / 2 · (x_u - x_l)
     x_1 = x_l + d;   x_2 = x_u - d
   If f(x_1) > f(x_2): x_l,new = x_2
   If f(x_1) < f(x_2): x_u,new = x_1
   Repeat the above 3 steps until convergence (x_1 ≈ x_2)
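The three steps above can be sketched directly (this naive version re-evaluates both interior points each pass, whereas the classical algorithm reuses one; the test function is my own):

```python
# Golden-section search for a maximum of a unimodal function on [xl, xu].
import math

def golden_max(f, xl, xu, tol=1e-8):
    R = (math.sqrt(5.0) - 1.0) / 2.0        # golden ratio, ~0.618
    while (xu - xl) > tol:
        d = R * (xu - xl)
        x1, x2 = xl + d, xu - d             # two interior points
        if f(x1) > f(x2):
            xl = x2                          # maximum lies in [x2, xu]
        else:
            xu = x1                          # maximum lies in [xl, x1]
    return (xl + xu) / 2.0

x = golden_max(lambda x: -(x - 1.5) ** 2, 0.0, 4.0)   # maximum at x = 1.5
```

Each pass shrinks the bracket by the factor R, so the error bound decreases geometrically regardless of the function's shape.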
8. OPTIMISATION
8.3 One Dimensional
 Quadratic interpolation:
   Take three initial guesses
   Formulate the quadratic equation
   Find the maximum/minimum by differentiating it, resulting in:

   x_3 = [ f(x_0)(x_1^2 - x_2^2) + f(x_1)(x_2^2 - x_0^2) + f(x_2)(x_0^2 - x_1^2) ]
         / [ 2 f(x_0)(x_1 - x_2) + 2 f(x_1)(x_2 - x_0) + 2 f(x_2)(x_0 - x_1) ]
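One step of this formula can be coded directly; for an exactly quadratic function the vertex is found in a single step (function and points below are my own example):

```python
# One quadratic-interpolation step: fit a parabola through
# (x0,f(x0)), (x1,f(x1)), (x2,f(x2)) and return its vertex x3.

def quad_step(f, x0, x1, x2):
    num = (f(x0) * (x1**2 - x2**2) + f(x1) * (x2**2 - x0**2)
           + f(x2) * (x0**2 - x1**2))
    den = (2.0 * f(x0) * (x1 - x2) + 2.0 * f(x1) * (x2 - x0)
           + 2.0 * f(x2) * (x0 - x1))
    return num / den

f = lambda x: -(x - 2.0) ** 2 + 5.0      # maximum at x = 2
x3 = quad_step(f, 0.0, 1.0, 4.0)
```

For a general function, x3 replaces one of the three points and the step is repeated until convergence.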
8. OPTIMISATION
8.3 One Dimensional
 General flow chart:
   Initial guess -> find next guess -> error check;
   if not converged, select the point for the next evaluation and
   repeat; otherwise the optimum has been reached.
8. OPTIMISATION
8.3 One Dimensional
 Newton Method
   Recall the Newton-Raphson method for obtaining the root of
   equations.
   Only, now f'(x) = 0 is solved instead of f(x) = 0:
     x_{i+1} = x_i - f'(x_i) / f''(x_i)
   The function f must be twice differentiable, with non-zero f''.
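A sketch of the iteration above; the example function f(x) = 2 sin x - x²/10, its analytic derivatives, and the starting point are my own choice:

```python
# Newton's method for an optimum: Newton-Raphson applied to f'(x) = 0,
# i.e. x_{i+1} = x_i - f'(x_i)/f''(x_i).
import math

def newton_opt(df, d2f, x, tol=1e-10, max_it=50):
    for _ in range(max_it):
        step = df(x) / d2f(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# f(x) = 2 sin x - x^2/10, which has a maximum near x = 1.4276
df = lambda x: 2.0 * math.cos(x) - x / 5.0     # f'(x)
d2f = lambda x: -2.0 * math.sin(x) - 0.2       # f''(x)
x = newton_opt(df, d2f, 2.5)
```

Because the iteration only seeks f'(x) = 0, the sign of f''(x) must be checked afterwards to tell a maximum from a minimum.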
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Most optimisation problems are multi-dimensional.
 Classification:
   Non-gradient approach (direct methods), e.g. (pseudo-)random
   search, univariate & pattern search
   Gradient-based approach, e.g. gradient & Hessian, steepest
   ascent/descent
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Pseudo-random search:
   For each design variable, calculate
     x = x_l + (x_u - x_l) · r
   where r is a random number between 0 and 1, generated by computer.
   Then evaluate f(x_1, x_2, ...) and locate the maximum/minimum.
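A minimal sketch of this sampling scheme (the function, bounds, sample count, and seed are all my own example):

```python
# Pseudo-random search: sample each design variable as
# x = xl + (xu - xl)*r with r ~ U(0,1), and keep the best point found.
import random

def random_search(f, bounds, n_samples=20000, seed=1):
    rng = random.Random(seed)            # seeded for repeatability
    best_x, best_f = None, float("-inf")
    for _ in range(n_samples):
        x = [xl + (xu - xl) * rng.random() for (xl, xu) in bounds]
        fx = f(x)
        if fx > best_f:                  # locate the maximum
            best_x, best_f = x, fx
    return best_x, best_f

f = lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2   # maximum 0 at (1, 2)
best_x, best_f = random_search(f, [(-2.0, 2.0), (-2.0, 4.0)])
```

The method needs no derivatives and cannot diverge, but its accuracy improves only slowly with the number of samples.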
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Univariate search:
   Set an initial guess
   Perform a 1-D search while fixing the other variables
   Repeat for each variable, one at a time
   Repeat the above until reaching the maximum/minimum
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Gradient, Hessian:
   The directional derivative, or gradient, points along the steepest
   ascent/descent (slope) towards the maximum/minimum at each
   numerical step:
     grad f = [ ∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_n ]^T
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Gradient, Hessian:
   The behaviour (ascending or descending) can be checked using the
   Hessian matrix test:
     |H| = (∂²f/∂x²)(∂²f/∂y²) - (∂²f/∂x∂y)²
   If |H| > 0 and ∂²f/∂x² > 0, f has a local minimum
   If |H| > 0 and ∂²f/∂x² < 0, f has a local maximum
   If |H| < 0, f has a saddle point
   For the evaluation of the gradient and Hessian, one can use an
   analytical or a numerical approach.
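The numerical route to the Hessian test can be sketched with central finite differences, which are exact (up to round-off) for a quadratic; the example function is my own and has a maximum at (2, 1):

```python
# Hessian test via central finite differences for f(x, y).

def hessian_2d(f, x, y, h=1e-4):
    fxx = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h**2)
    return fxx, fyy, fxy

f = lambda x, y: 2.0 * x * y + 2.0 * x - x**2 - 2.0 * y**2
fxx, fyy, fxy = hessian_2d(f, 2.0, 1.0)   # evaluate at the stationary point
detH = fxx * fyy - fxy**2
# Here detH > 0 and fxx < 0, so (2, 1) is a local maximum.
```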
8. OPTIMISATION
8.4 Multidimensional, Unconstrained
 Steepest Ascent/Descent:
   Proceeding from the gradient approach, the complete step-by-step
   procedure is as follows:
   Set an initial guess
   Find the directional gradient
   Formulate the one-dimensional directional function:
     g(h) = f(x_1 + h ∂f/∂x_1, x_2 + h ∂f/∂x_2, ..., x_n + h ∂f/∂x_n)
   Check for the maximum/minimum/saddle-point existence
   Find the optimum step size h by differentiating g(h)
   Find the next step x_{i+1} and repeat the process
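The steps above can be sketched as follows; instead of differentiating g(h) analytically, this version finds the step size by a coarse scan, and the example function, analytic gradient, and starting point are my own choices:

```python
# Steepest ascent for f(x, y) = 2xy + 2x - x^2 - 2y^2 (maximum at (2, 1)).
# The step size h is chosen by scanning g(h) = f(x + h*gx, y + h*gy).

def grad(x, y):
    return (2.0 * y + 2.0 - 2.0 * x, 2.0 * x - 4.0 * y)

def steepest_ascent(x, y, n_iter=100):
    for _ in range(n_iter):
        gx, gy = grad(x, y)
        g = lambda h: (2.0 * (x + h * gx) * (y + h * gy)
                       + 2.0 * (x + h * gx)
                       - (x + h * gx) ** 2 - 2.0 * (y + h * gy) ** 2)
        # coarse line search over h in [0, 1] in steps of 0.001
        h_best = max((k * 0.001 for k in range(1001)), key=g)
        x, y = x + h_best * gx, y + h_best * gy
    return x, y

x, y = steepest_ascent(-1.0, 1.0)
```

The iterates zigzag: each new gradient is orthogonal to the previous search direction, which is why convergence is only linear.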
8. OPTIMISATION
8.5 Linear, Constrained
 Linear Programming
   Bounds 1 - 6 represent the limits of the solution, whilst the
   dashed lines represent iso-values of the objective function, Z.
8. OPTIMISATION
8.5 Constrained
 Possible cases in linear programming:
   Unique solution
   Alternate solutions
   No feasible solution
   Unbounded problems
8. OPTIMISATION
8.5 Constrained
 Graphical Solution
   Approximate optimum solution: x1 = 10; x2 = 7
8. OPTIMISATION
8.5 Constrained
 Simplex Method
   Based on the graphical solution, Simplex uses the assumption that
   the optimum solution lies on an extreme point
   Constraint equations become equalities, by introducing slack
   variables
   Form the optimisation into a system of linear algebraic equations
   Solve it with Gauss-Jordan
8. OPTIMISATION
8.5 Constrained
 Simplex Method for Linear Programming
   Objective function: minimisation (or maximisation)
   Inequality constraints:
     sum_j g_ij x_j ≤ b_i
   Basic variables (real design variables), x_i
   Introducing slack variables x_k turns the inequality constraints
   into equality constraints:
     sum_j g_ij x_j + x_k = b_i
8. OPTIMISATION
 Simplex Method for Linear Programming (contd.)
   Then build a tableau:
     columns: basic variables x_1, ..., x_m; slack variables
     x_{m+1}, ..., x_n; right-hand side b
     each constraint row holds the modified constraint coefficients
     g*_ij, with a unit entry in the column of its slack variable
     the bottom row holds the modified objective function f*
8. OPTIMISATION
 Example of Simplex Method
   Using Gauss-Jordan, reduce the tableau so that the basic-variable
   columns form an identity matrix:

     x_1  x_2  ...  x_m   x_{m+1} ... x_n |   b
      1    0   ...   0    a*_{1,m+1} ...  |  b*_1
      0    1   ...   0        ...         |  b*_2
     ...                                  |  ...
      0    0   ...   1        ...         |  b*_m
      0    0   ...   0        ...         |  b*_{m+1}

   Then the solution is:
     x_1 = b*_1;  x_2 = b*_2;  ...;  x_m = b*_m
   with the objective function value given by the last row.
8. OPTIMISATION
 Example of Simplex Method
   Optimisation problem:
     max:    f(X) = 990 x_1 + 900 x_2 + 5250
     g_1(X): 0.4 x_1 + 0.6 x_2 ≤ 8.5
     g_2(X): 3 x_1 - x_2 ≤ 25
     g_3(X): 3 x_1 + 6 x_2 ≤ 70
             x_1, x_2 ≥ 0
8. OPTIMISATION
 Example of Simplex Method
   Tableau (basic variables x_1, x_2; slack variables x_3, x_4, x_5):

     x_1   x_2   x_3  x_4  x_5 |    b
     0.4   0.6    1    0    0  |   8.5
     3    -1      0    1    0  |  25
     3     6      0    0    1  |  70
     990   900    0    0    0  | 5250
8. OPTIMISATION
 Example of Simplex Method
   After Gauss-Jordan, the tableau becomes:

     x_1  x_2  x_3    x_4      x_5    |      b
      0    1    0   -0.1428   0.1428  |   6.4285
      1    0    0    0.2857   0.0476  |  10.4762
      0    0    1   -0.0285  -0.1047  |   0.4524
      0    0    0    154.28   175.71  | 21407.88

   Solution: x_1 = 10.4762; x_2 = 6.4285, giving the objective
   function f = 21407.88.
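Since the optimum of a linear program lies on an extreme point, the worked example can be cross-checked by brute-force vertex enumeration: intersect every pair of constraint lines, keep the feasible intersections, and take the one maximising f. This sketch (my own code; expect small rounding differences against the printed tableau values) recovers the same x_1 and x_2:

```python
# Brute-force vertex check of the LP example above.
from itertools import combinations

# Each constraint as (c1, c2, rhs) meaning c1*x1 + c2*x2 <= rhs
cons = [(0.4, 0.6, 8.5),    # g1
        (3.0, -1.0, 25.0),  # g2
        (3.0, 6.0, 70.0),   # g3
        (-1.0, 0.0, 0.0),   # x1 >= 0
        (0.0, -1.0, 0.0)]   # x2 >= 0

def feasible(x1, x2, eps=1e-9):
    return all(c1 * x1 + c2 * x2 <= rhs + eps for c1, c2, rhs in cons)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                         # parallel lines, no vertex
    x1 = (r1 * b2 - r2 * b1) / det       # Cramer's rule
    x2 = (a1 * r2 - a2 * r1) / det
    if feasible(x1, x2):
        f = 990.0 * x1 + 900.0 * x2 + 5250.0
        if best is None or f > best[0]:
            best = (f, x1, x2)

f_best, x1_best, x2_best = best
```

Enumeration is only practical for tiny problems; its point here is to confirm the extreme-point assumption that Simplex exploits.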