Introduction to Numerical Analysis Using MATLAB
By Dr Rizwan Butt

Transcript of Introduction to Numerical Analysis Using MATLAB by Dr Rizwan Butt.

CHAPTER ONE: Number Systems and Errors

Introduction

This chapter provides a brief introduction to numerical analysis.

Number Representation and Base of Numbers

Here we consider methods for representing numbers on computers.

1. Normalized Floating-point Representation

It describes how numbers are stored in computers.
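
As a small aside (an illustrative Python sketch, not from the book, which works in MATLAB), the normalized floating-point form m · 2^e with 0.5 ≤ |m| < 1 can be inspected directly:

```python
import math

# Decompose 6.0 into normalized form m * 2^e with 0.5 <= |m| < 1.
m, e = math.frexp(6.0)
print(m, e)                     # 0.75 and 3, since 6 = 0.75 * 2^3

# Reassembling the mantissa and exponent recovers the number exactly.
assert math.ldexp(m, e) == 6.0
```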

1. Human Error This arises when we use inaccurate measurements of data or inaccurate representations of mathematical constants.

2. Truncation Error This arises when we are forced to use mathematical techniques that give approximate, rather than exact, answers.

3. Round-off Error This type of error is associated with the limited number of digits used to represent numbers in computers.
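
A one-line Python illustration of round-off error (not from the book): the decimal fraction 0.1 has no exact binary representation, so even a simple sum carries a tiny error.

```python
# 0.1 and 0.2 are stored inexactly in binary floating point,
# so their computed sum is not exactly 0.3.
s = 0.1 + 0.2
print(s)                      # 0.30000000000000004
assert s != 0.3               # the round-off error is small but real
assert abs(s - 0.3) < 1e-15
```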

Effect of Round-off Errors in Arithmetic Operations

Here we analyse different ways of understanding the nature of rounding errors.

1. Rounding off Errors in Addition and Subtraction

It describes how addition and subtraction of numbers are performed in a computer.

2. Rounding off Errors in Multiplication

It describes how multiplication of numbers is performed in a computer.

3. Rounding off Errors in Division

It describes how division of numbers is performed in a computer.

4. Rounding off Errors in Powers and roots

It describes how powers and roots of numbers are computed in a computer.

CHAPTER TWO: Solution of Nonlinear Equations

Introduction

Here we discuss ways of representing the different types of nonlinear equations f(x) = 0 and how to find approximations of their real roots.

Simple Root’s Numerical Methods

Here we discuss how to find the approximation of the simple root (non-repeating) of the nonlinear equation f(x) = 0.

1. Method of Bisection This is a simple, slowly convergent method (but convergence is guaranteed) based on the Intermediate Value Theorem. Its strategy is to bisect the interval and then retain the half whose endpoints still bracket the root.
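
The bisection strategy just described can be sketched as follows; this is an illustrative Python version (the book itself uses MATLAB), assuming f(a) and f(b) have opposite signs:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        c = (a + b) / 2.0              # bisect the interval
        fc = f(c)
        if fc == 0 or (b - a) / 2.0 < tol:
            return c
        if fa * fc < 0:                # root lies in [a, c]
            b, fb = c, fc
        else:                          # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

# The positive root of x^2 - 2 = 0 is sqrt(2) ~ 1.41421356.
root = bisection(lambda x: x**2 - 2.0, 0.0, 2.0)
```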

2. False Position Method This is a slowly convergent method that may be thought of as an attempt to improve the convergence characteristics of the bisection method. It is also known as the method of linear interpolation.

3. Fixed-Point Method This is a very general method for finding the root of a nonlinear equation, and it provides a theoretical framework within which the convergence properties of subsequent methods can be evaluated. The basic idea of this method is to convert the equation f(x) = 0 into an equivalent form x = g(x).

4. Newton's Method This is a fast convergent method (but convergence is not guaranteed), also known as the method of tangents: after estimating the actual root, the zero of the tangent to the function at that estimate is determined.
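
A minimal sketch of the tangent idea, in Python rather than the book's MATLAB: the iteration x_{k+1} = x_k − f(x_k)/f′(x_k) follows the tangent line at x_k to its zero.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent at each iterate to its zero."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)       # zero of the tangent line at x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.0)
```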

5. Secant Method This is a fast convergent method (though not as fast as Newton's method) and is recommended as the best general-purpose method. It is very similar to the false position method, but it is not necessary for the interval to contain a root, and no account is taken of the signs of the numbers f(x_n).
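
The secant idea can be sketched as follows (an illustrative Python version, not the book's MATLAB code): Newton's derivative is replaced by a difference quotient through the two most recent iterates, and no sign checks are made.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: the derivative in Newton's method is replaced by a
    difference quotient through the two most recent iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # zero of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# The real root of x^3 - x - 2 = 0 lies between 1 and 2.
root = secant(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```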

Multiple Root's Numerical Methods Here we discuss how to find the approximation of a multiple root (repeating) of the nonlinear equation f(x) = 0 with order of multiplicity m.

1. First Modified Newton's Method

It can be used to find the approximation of a multiple root if the order of multiplicity m is given.

2. Second Modified Newton's Method

It can be used to find the approximation of a multiple root if the order of multiplicity m is not given.

Convergence of Iterative Methods

Here we discuss the order of convergence of all the iterative methods described in this chapter.

Acceleration of Convergence

Here we discuss a method which can be applied to any linearly convergent iterative method to achieve acceleration of convergence.

Systems of Nonlinear Equations

Here we are given more than one nonlinear equation. Solving systems of nonlinear equations is a difficult task.

Newton's Method

We discuss this method for a system of two nonlinear equations in two variables. This method can be used for systems of nonlinear equations that have analytical partial derivatives; otherwise it cannot.

Roots of Polynomials

A very common problem in nonlinear equations, finding the roots of a polynomial, is discussed here.

1. Horner’s Method

It is one of the most efficient ways to evaluate polynomials and their derivatives at a given point. It is helpful for finding the initial approximation for solution by Newton's method, and it is also quite stable.
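
Horner's rule for a polynomial p(x) and its derivative p′(x) can be sketched as follows (an illustrative Python version; the book works in MATLAB):

```python
def horner(coeffs, x):
    """Evaluate p(x) and p'(x) by Horner's rule.
    coeffs lists the coefficients from the highest power down."""
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p        # derivative recurrence uses the previous p
        p = p * x + c          # nested multiplication for p itself
    return p, dp

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3: p(3) = 5, p'(3) = 20.
p, dp = horner([2.0, -6.0, 2.0, -1.0], 3.0)
```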

2. Muller's Method It is a generalization of the secant method and uses quadratic interpolation among three points. It is a fast convergent method for finding the approximation of a simple zero of a polynomial equation.

3. Bairstow's Method It can be used to find all the zeros of a polynomial. It is one of the most efficient methods for determining real and complex roots of polynomials with real coefficients.

CHAPTER THREE: Systems of Linear Equations

Introduction We give a brief introduction to linear equations, linear systems, and their importance.

Properties of Matrices and Determinants To discuss the solution of linear systems, it is necessary to introduce the basic properties of matrices and determinants.

Numerical Methods for Linear Systems To solve systems of linear equations numerically, two types of methods are available: methods of the first type are called direct methods, and those of the second type are called iterative methods.

Direct Methods for Linear Systems A method of this type refers to a procedure for computing a solution from a form that is mathematically exact. These methods are guaranteed to succeed and are recommended for general-purpose use.

1. Cramer's Rule This method is used for solving linear systems by means of determinants. It is one of the least efficient methods for solving a large number of linear equations, but it is useful for explaining some problems inherent in the solution of linear equations.

2. Gaussian Elimination Method This is the most popular and widely used method for solving linear systems. The basic idea of this method is to convert the original system into an equivalent upper-triangular system, from which each unknown is determined by backward substitution.

2.1 Without Pivoting In converting the original system to an upper-triangular system, if a diagonal element becomes zero, then we have to interchange that equation with any equation below it having a nonzero diagonal element.

2.2 Partial Pivoting

In using Gauss elimination with partial pivoting (or row pivoting), the basic approach is to use the largest (in absolute value) element on or below the diagonal in the column of current interest as the pivotal element for elimination in the rest of that column.
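
The partial-pivoting strategy above can be sketched in a few lines; this is an illustrative Python version for a small dense system (not the book's MATLAB code):

```python
def gauss_pp(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of rows; copies are modified, the inputs are left intact."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: pick the largest |a_ik| on or below the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Backward substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
x = gauss_pp([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```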

2.3 Complete Pivoting

In this case we search for the largest number (in absolute value) in the entire array instead of just the first column, and this number is the pivot. This means we need to interchange columns as well as rows.

3. Gauss-Jordan Method It is a modification of the Gauss elimination method; although inefficient for practical calculation, it is often useful for theoretical purposes. The basic idea of this method is to convert the original system into diagonal form.

4. LU Decomposition Method It is also a modification of the Gauss elimination method; here we decompose or factorize the coefficient matrix into the product of two triangular matrices (lower and upper).

4.1 Doolittle's method (l_ii = 1) Here the upper-triangular matrix is obtained by the forward elimination of the Gauss elimination method, and the lower-triangular matrix contains the multipliers used in the Gauss elimination process as the elements below the diagonal, with unity elements on the main diagonal.
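
The Doolittle factorization can be sketched compactly (an illustrative Python version, not the book's MATLAB code), computing one row of U and one column of L at a time:

```python
def doolittle(A):
    """Doolittle LU decomposition: A = L U with unit diagonal in L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L (the multipliers)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = doolittle(A)   # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```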

4.2 Crout's method (u_ii = 1) Crout's method, in which the upper-triangular matrix has unity on the main diagonal, is similar to Doolittle's method in all other respects. The lower-triangular and upper-triangular matrices are obtained by expanding the matrix equation A = LU term by term to determine their elements.

4.3 Cholesky method (l_ii = u_ii)

This method is of the same form as the Doolittle and Crout methods, except that it is limited to equations involving symmetric coefficient matrices. It provides a convenient means of investigating the positive definiteness of symmetric matrices.

Norms of Vectors and Matrices

For solving linear systems, we discuss a method for quantitatively measuring the distance between vectors in R^n and a measure of how well one matrix approximates another.

Iterative Methods for Solving Linear Systems These methods start with an arbitrary first approximation to the unknown solution of the linear system and then improve this estimate in an infinite but convergent sequence of steps. Methods of this type are used for large sparse systems and are efficient in terms of computer storage and time requirements.

1. Jacobi Iterative Method It is a slowly convergent iterative method for linear systems. From its formula, it is seen that the new estimates for the solution are computed from the old estimates.

2. Gauss-Seidel Iterative Method It is a faster convergent iterative method than the Jacobi method for the solution of linear systems, as it uses the most recently calculated values for all x_i.
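
The contrast between the two iterations, old estimates only (Jacobi) versus freshest values (Gauss-Seidel), can be sketched as follows on a small diagonally dominant system (an illustrative Python version, not the book's MATLAB code):

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: every new x_i uses only the OLD estimates."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel: each new x_i immediately uses the freshest values."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Diagonally dominant system 4x + y = 6, x + 3y = 7; exact solution [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
xj = jacobi(A, b, [0.0, 0.0], 50)
xg = gauss_seidel(A, b, [0.0, 0.0], 50)
```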

Convergence Criteria We discuss sufficient conditions for the convergence of the Jacobi and Gauss-Seidel methods by showing that the l_∞-norm of their corresponding iteration matrices is less than one.

Eigenvalues and Eigenvectors We briefly discuss the eigenvalues and eigenvectors of a matrix and show how they can be used to describe the solutions of linear systems.

3. Successive Over-Relaxation Method It is a useful modification of the Gauss-Seidel method. It is the iterative method of choice, but one needs to determine the optimum value of its parameter.

4. Conjugate Gradient Method It is very useful when employed as an iterative approximation method for solving large sparse linear systems. The need for estimating a parameter is removed in this method.

Conditioning of Linear Systems We discuss ill-conditioning of linear systems by using the condition number of a matrix. The best way to deal with ill-conditioning is to avoid it by reformulating the problem.

Iterative Refinement We discuss the residual corrector method, which can be used to improve an approximate solution obtained by any means.

CHAPTER FOUR: Approximating Functions

Introduction

We describe several numerical methods for approximating functions other than elementary functions. The main purpose of these numerical methods is to replace a complicated function by one which is simpler and more manageable.

Polynomial Interpolation for Uneven Intervals

The data points we consider here in a given functional relationship are not equally spaced.

1. Lagrange Interpolating Polynomials It is one of the most popular and well-known interpolation methods for approximating a function at an arbitrary point, and it provides a direct approach for determining interpolated values regardless of data spacing.
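
The Lagrange form can be sketched directly from its definition (an illustrative Python version, not the book's MATLAB code); note that a degree-n interpolant reproduces any polynomial of degree ≤ n exactly:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # i-th Lagrange basis polynomial
        total += yi * Li
    return total

# Three points on y = x^2; the degree-2 interpolant recovers y(2) = 4.
val = lagrange([0.0, 1.0, 3.0], [0.0, 1.0, 9.0], 2.0)
```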

2. Newton's General Interpolating Formula It is generally more efficient than the Lagrange polynomial and can be adjusted easily for additional data.

3. Aitken's Method It is an iterative interpolation method based on the repeated application of a simple interpolation method.

Polynomial Interpolation for Even Intervals The data points we consider here in a given functional relationship are equally spaced, and the polynomials are based on differences, which are easy to use.

1. Newton’s Forward-Difference Formula It can be used for interpolation near the beginning of table values.

2. Newton's Backward-Difference Formula It can be used for interpolation near the end of table values.

3. Some Central-Difference Formulas These can be used for interpolation in the middle of the table values; among them are the Stirling, Bessel, and Gauss formulas.

Interpolation with Spline Functions An alternative approach is to divide the interval into a collection of subintervals and construct a different approximating polynomial on each subinterval; this is called piecewise polynomial approximation.

1. Linear Spline One of the simplest piecewise polynomial interpolations for approximating functions; the basic idea is simply to connect consecutive points with straight lines.

2. Cubic Spline The most widely used cubic spline approximations are patched among ordered data so as to maintain continuity and smoothness, and they are more powerful than polynomial interpolation.

Least Squares Approximation Least squares approximation, which seeks to minimize the sum (over all data) of the squares of the differences between function values and data values, is most useful for large and rough sets of data.

1. Linear Least Squares It defines the correct straight line as the one that minimizes the sum of the squares of the distances between the data points and the line.
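
The normal-equation formulas for the least squares line y = a + bx can be sketched as follows (an illustrative Python version, not from the book):

```python
def linear_fit(xs, ys):
    """Least squares line y = a + b x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

# Points lying exactly on y = 2x + 1 are recovered exactly.
a, b = linear_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```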

2. Polynomial Least Squares When data from experimental results are not linear, we find the least squares parabola; the extension to a polynomial of higher degree is easily made.

3. Nonlinear Least Squares In many cases data from experimental tests are not linear, so we fit to them the two popular exponential forms y = ax^b and y = ae^(bx).

4. Least Squares Plane When the dependent variable is a function of two variables, the least squares plane can be used to find an approximation of the function.

5. Overdetermined Linear Systems The least squares solution of an overdetermined linear system can be obtained by minimizing the l_2-norm of the residual.

6. Least Squares with QR Decomposition The least squares solution of an overdetermined linear system can be obtained by using the QR decomposition (an orthogonal matrix Q and an upper-triangular matrix R) of a given matrix.

7. Least Squares with Singular Value Decomposition The least squares solution of an overdetermined linear system can be obtained by using the singular value decomposition (UDV^T, with two orthogonal matrices U, V and a generalized diagonal matrix D) of a given matrix.

CHAPTER FIVE: Differentiation and Integration

Introduction Here we deal with techniques for numerically approximating the two fundamental operations of the calculus: differentiation and integration.

Numerical Differentiation A polynomial p(x) is differentiated to obtain p′(x), which is taken as an approximation to f′(x) for any numerical value x.

Numerical Differentiation Formulas Here we give many numerical formulas for approximating the first and second derivatives of a function.

1. First Derivative Formulas For finding the approximation of the first derivative of a function, we use two-point, three-point, and five-point formulas.

2. Second Derivative Formulas For finding the approximation of the second derivative of a function, we use three-point and five-point formulas.

3. Formulas for Computing Derivatives Here we give many forward-difference, backward-difference, and central-difference formulas for approximating the first and second derivatives of a function.
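
Two of the standard central-difference formulas can be sketched as follows (a Python illustration, not the book's MATLAB code); both are second-order accurate in the step h:

```python
import math

def d1_central(f, x, h):
    """Three-point central difference for f'(x), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2_central(f, x, h):
    """Three-point central difference for f''(x), error O(h^2)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# For f(x) = sin(x): f'(1) = cos(1) and f''(1) = -sin(1).
d1 = d1_central(math.sin, 1.0, 1e-5)
d2 = d2_central(math.sin, 1.0, 1e-4)
```

Note that h cannot be made arbitrarily small: beyond a point, the subtractive cancellation discussed under round-off errors dominates the truncation error.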

Numerical Integration Here we pass a polynomial through points of a function and then integrate this polynomial approximation of the function. For approximating the integral of f(x) between a and b we use Newton-Cotes techniques.

1. Closed Newton-Cotes Formulas For these formulas, the endpoints a and b of the given interval [a, b] are in the set of interpolating points, and the formulas can be obtained by integrating polynomials fitted to equispaced data points.

1.1 Trapezoidal Rule This rule is based on integration of the linear interpolation.

1.2 Simpson's Rule This rule approximates the function f(x) with a quadratic interpolating polynomial.

2. Open Newton-Cotes Formulas These formulas contain all the interpolating points within the open interval (a, b) and can be obtained by integrating polynomials fitted to equispaced data points.

3. Repeated Use of the Trapezoidal Rule The repeated trapezoidal rule is derived by repeating the trapezoidal rule; for a given domain of integration, the error of the repeated trapezoidal rule is proportional to h^2.
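
The repeated (composite) trapezoidal rule can be sketched as follows (an illustrative Python version, not the book's MATLAB code):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals; error O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))        # endpoints get half weight
    for i in range(1, n):
        s += f(a + i * h)          # interior points get full weight
    return h * s

# The integral of sin(x) over [0, pi] is exactly 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
```

Halving h reduces the error by roughly a factor of four, consistent with the h^2 behaviour noted above.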

4. Romberg Integration Romberg integration is based on the repeated trapezoidal rule: using the results of the repeated trapezoidal rule with two different data spacings, a more accurate integral is evaluated.

5. Gaussian Quadratures The Gauss(-Legendre) quadratures are based on integrating a polynomial fitted to the data points at the roots of a Legendre polynomial; the order of accuracy of a Gauss quadrature is approximately twice as high as that of the closed Newton-Cotes formula using the same number of data points.

CHAPTER SIX: Ordinary Differential Equations

Introduction We discuss many numerical methods for solving first-order ordinary differential equations and systems of first-order ordinary differential equations.

Numerical Methods for Solving IVP Here we discuss many single-step and multi-step numerical methods for solving the initial-value problem (IVP) and some numerical methods for solving the boundary-value problem (BVP).

1. Single-Step Methods for IVP Methods of this type are self-starting: they estimate y′(x) from the initial condition and proceed step by step. All the information used by these methods is obtained within the interval over which the solution is being approximated.

1.1 Euler's Method One of the simplest and most straightforward, but not an efficient, numerical method for solving the initial-value problem (IVP).

1.2 Higher-Order Taylor's Methods For obtaining higher accuracy, the Taylor methods are excellent when the higher-order derivatives can be found.

1.3 Runge-Kutta Methods An important group of methods which allow us to obtain great accuracy at each step, and at the same time avoid the need for higher derivatives, by evaluating the function at selected points in each subinterval.
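
Euler's method and the classical fourth-order Runge-Kutta method can be compared on the test problem y′ = y, y(0) = 1, whose exact solution at x = 1 is e (an illustrative Python sketch, not the book's MATLAB code):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: advance along the tangent, y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta: four slope samples per step."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

f = lambda x, y: y                    # y' = y, exact solution y(x) = e^x
y_euler = euler(f, 0.0, 1.0, 0.1, 10)
y_rk4 = rk4(f, 0.0, 1.0, 0.1, 10)     # far closer to e than Euler's value
```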

2. Multi-Step Methods for IVP

Methods of this type make use of information about the solution at more than one point.

2.1 Adams Methods

These methods use the information at multiple steps of the solution to obtain the solution at the next x-value.

2.2 Predictor-Corrector Methods

These methods are combinations of an explicit method and an implicit method; they consist of a predictor step and a corrector step in each interval.

Systems of Simultaneous ODEs Here we require the solution of a system of simultaneous first-order differential equations rather than a single equation.

Higher-Order Differential Equations Here we deal with the higher-order (nth-order) differential equation and solve it by converting it to an equivalent system of n first-order equations.

Boundary-Value Problems Here we solve an ordinary differential equation with known conditions at more than one value of the independent variable.

1. The Shooting Method It is based on forming a linear combination of the solutions to two initial-value problems (linear shooting method), or on converting a boundary-value problem to a sequence of initial-value problems (nonlinear shooting method), which can be solved using the single-step methods.

2. The Finite Difference Method It is based on finite differences and reduces a boundary-value problem to a system of linear equations, which can be solved by using the methods discussed in the linear systems chapter.

CHAPTER SEVEN: Eigenvalues and Eigenvectors

Introduction Here we discuss many numerical methods for solving eigenvalue problems, which seem to be a very fundamental part of the structure of the universe.

Linear Algebra and Eigenvalue Problems The solution of many physical problems requires the calculation of the eigenvalues and corresponding eigenvectors of a matrix associated with a linear system of equations.

Basic Properties of Eigenvalue Problems We discuss many properties concerning eigenvalue problems which help us a great deal in solving different problems.

Numerical Methods for Eigenvalue Problems Here we discuss many numerical methods for finding approximations of the eigenvalues and corresponding eigenvectors of matrices.

Vector Iterative Methods for Eigenvalues Numerical methods of this type are most useful when the matrix involved becomes large, and they are an easy means of computing the eigenvalues and eigenvectors of a matrix.

1. Power Method It can be used to compute the eigenvalue of largest modulus (the dominant eigenvalue) and the corresponding eigenvector of a general matrix.
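
The power method can be sketched as follows (an illustrative Python version, not the book's MATLAB code): repeated multiplication by A aligns the iterate with the dominant eigenvector, and the normalizing factor converges to the dominant eigenvalue.

```python
def power_method(A, x0, iters):
    """Power method for the dominant eigenvalue and its eigenvector."""
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)            # dominant-eigenvalue estimate
        x = [yi / lam for yi in y]       # normalize so the iterate stays bounded
    return lam, x

# Eigenvalues of [[2, 1], [1, 2]] are 3 and 1; the method finds 3,
# with eigenvector proportional to [1, 1].
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0], 50)
```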

2. Inverse Power Method

This modification of the power method can be used to compute the smallest (least) eigenvalue and the corresponding eigenvector of a general matrix.

3. Shifted Inverse Power Method

This modification of the power method consists of replacing the given matrix A by (A−μI); the eigenvalues of (A−μI) are the same as those of A, except that they have all been shifted by an amount μ.

Location of Eigenvalues We deal here with the location of the eigenvalues of both symmetric and non-symmetric matrices, that is, the location of the zeros of the characteristic polynomial, by using the Gerschgorin Circles Theorem and the Rayleigh Quotient Theorem.

Intermediate Eigenvalues Here we discuss the deflation method for obtaining the other eigenvalues of a matrix once the dominant eigenvalue is known.

Eigenvalues of Symmetric Matrices Here we develop some methods for finding all eigenvalues of a symmetric matrix by using a sequence of similarity transformations that transform the original matrix into a diagonal or tridiagonal matrix.

1. Jacobi Method It can be used to find all eigenvalues and corresponding eigenvectors of a symmetric matrix, and it permits the transformation of the matrix into a diagonal one.

2. Sturm Sequence Iteration It can be used in the calculation of the eigenvalues of any symmetric tridiagonal matrix.

3. Givens' Method It can be used to find all eigenvalues of a symmetric matrix (the corresponding eigenvectors can be obtained by using the shifted inverse power method), and it permits the transformation of the matrix into a tridiagonal one.

4. Householder's Method This method is a variation of Givens' method and enables us to reduce a symmetric matrix to symmetric tridiagonal form.

Matrix Decomposition Methods Here we use three matrix decomposition methods to find all the eigenvalues of a general matrix.

1. QR Method In this method we decompose the given matrix into a product of an orthogonal matrix and an upper-triangular matrix, which allows us to find all the eigenvalues of a general matrix.

2. LR Method This method is based upon the decomposition of the given matrix into a product of a lower-triangular matrix (with unit diagonal elements) and an upper-triangular matrix.

3. Singular Value Decomposition Here we decompose a rectangular real matrix into a product of two orthogonal matrices and a generalized diagonal matrix.

Appendices

1. Appendix A includes some mathematical preliminaries.

2. Appendix B includes the basic commands for the software package MATLAB.

3. Appendix C includes the index of MATLAB programs and MATLAB built-in functions.

4. Appendix D includes symbolic computation and Symbolic Math Toolbox functions.

5. Appendix E includes answers to selected odd-numbered exercises for all chapters.