Numerical Analysis – Linear Equations(II)


Page 1: Numerical Analysis – Linear Equations(II)

Numerical Analysis – Linear Equations(II)

Hanyang University

Jong-Il Park

Page 2: Numerical Analysis – Linear Equations(II)

Singular Value Decomposition (SVD)

Why SVD?

Gaussian elimination and LU decomposition fail to give satisfactory results for singular or numerically near-singular matrices.

SVD can cope with over- or under-determined problems.

SVD constructs orthonormal basis vectors.

Page 3: Numerical Analysis – Linear Equations(II)

What is SVD?

Any M×N matrix A whose number of rows M is greater than or equal to its number of columns N can be written as the product of an M×N column-orthogonal matrix U, an N×N diagonal matrix W with positive or zero elements (the singular values), and the transpose of an N×N orthogonal matrix V:

A = U W V^T

Page 4: Numerical Analysis – Linear Equations(II)

Properties of SVD

Orthonormality: U^T U = I => U^{-1} = U^T, and V^T V = I => V^{-1} = V^T (a numerical check is sketched below).

Uniqueness: The decomposition can always be done, no matter how singular the matrix is, and it is almost unique.
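
As a concrete check of the orthonormality property, a minimal self-contained C sketch (the tolerance and the 3×3 rotation test matrix are illustrative assumptions, not from the slides):

```c
#include <math.h>
#include <stdio.h>

#define N 3

/* Numerical check of the orthonormality property U^T U = I:
   (U^T U)_{ij} is the dot product of columns i and j of U, which
   should be 1 on the diagonal and 0 elsewhere. */
int is_column_orthonormal(double U[N][N], double tol)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double dot = 0.0;
            for (int k = 0; k < N; k++)
                dot += U[k][i] * U[k][j];
            if (fabs(dot - (i == j ? 1.0 : 0.0)) > tol)
                return 0;
        }
    return 1;
}

int main(void)
{
    /* A plane rotation is orthogonal, so the check should pass. */
    double c = cos(0.3), s = sin(0.3);
    double U[N][N] = { { c, -s, 0 }, { s, c, 0 }, { 0, 0, 1 } };
    printf("orthonormal: %s\n", is_column_orthonormal(U, 1e-12) ? "yes" : "no");
    return 0;
}
```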

Page 5: Numerical Analysis – Linear Equations(II)

SVD of a square matrix

Columns of U form an orthonormal set of basis vectors (those whose w_j's are nonzero span the range of A).

Columns of V whose corresponding w_j's are zero form an orthonormal basis for the nullspace of A.

Page 6: Numerical Analysis – Linear Equations(II)

Review of basic concepts in linear algebra

Page 7: Numerical Analysis – Linear Equations(II)

Homogeneous equation

Homogeneous equation (b = 0) with A singular:

Any column of V whose corresponding w_j is zero yields a solution of A x = 0.
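
A sketch of how those solutions are extracted in code, assuming the factors V and w have already been computed by an SVD routine such as NR's svdcmp() (the function name and the tolerance are illustrative):

```c
#include <math.h>

/* Collect an orthonormal basis of the nullspace of A from its SVD:
   every column of V whose singular value w[j] is (numerically) zero
   solves A x = 0.  Returns the nullspace dimension and fills the
   first 'dim' columns of 'basis'. */
int nullspace_basis(int n, double V[n][n], double w[n], double tol,
                    double basis[n][n])
{
    int dim = 0;
    for (int j = 0; j < n; j++) {
        if (fabs(w[j]) <= tol) {          /* w_j == 0 up to round-off */
            for (int i = 0; i < n; i++)   /* copy column j of V */
                basis[i][dim] = V[i][j];
            dim++;
        }
    }
    return dim;
}
```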

Page 8: Numerical Analysis – Linear Equations(II)

Nonhomogeneous equation

Nonhomogeneous equation A x = b with singular A: take

x = V diag(1/w_j) (U^T b),

where we replace 1/w_j by zero if w_j = 0.

※ The solution x obtained by this method is the solution vector of the smallest length.

※ If b is not in the range of A, SVD finds the solution x in the least-squares sense, i.e. the x which minimizes r = |Ax - b|.
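
In code, this pseudoinverse-style solve is a short routine in the same spirit as NR's svbksb(); the function name, shapes, and zero-threshold below are assumptions:

```c
#include <math.h>

/* Solve A x = b given the SVD A = U W V^T (U: m-by-n column-orthogonal,
   w: the n singular values, V: n-by-n).  Computes
       x = V . diag(1/w_j) . (U^T b),
   replacing 1/w_j by zero whenever w_j is (near) zero, which yields the
   minimum-length / least-squares solution. */
void svd_solve(int m, int n, double U[m][n], double w[n], double V[n][n],
               double b[m], double x[n], double tol)
{
    double tmp[n];                        /* tmp = diag(1/w) U^T b */
    for (int j = 0; j < n; j++) {
        double s = 0.0;
        if (fabs(w[j]) > tol) {           /* skip (near-)zero singular values */
            for (int i = 0; i < m; i++)
                s += U[i][j] * b[i];      /* (U^T b)_j */
            s /= w[j];
        }
        tmp[j] = s;
    }
    for (int i = 0; i < n; i++) {         /* x = V tmp */
        double s = 0.0;
        for (int j = 0; j < n; j++)
            s += V[i][j] * tmp[j];
        x[i] = s;
    }
}
```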

Page 9: Numerical Analysis – Linear Equations(II)

SVD solution - concept

Page 10: Numerical Analysis – Linear Equations(II)

SVD – under/over-determined problems

SVD for fewer equations than unknowns (under-determined)

SVD for more equations than unknowns (over-determined): SVD yields the least-squares solution; in general such a problem is non-singular.

Page 11: Numerical Analysis – Linear Equations(II)

Applications of SVD

Applications

1. Constructing an orthonormal basis. Problem: given N vectors in an M-dimensional vector space, find an orthonormal basis for the space they span. Solution: form the matrix A whose columns are the given vectors; the columns of the matrix U in its SVD are the desired orthonormal basis.

2. Approximation of matrices

Page 12: Numerical Analysis – Linear Equations(II)

Vector norm

Page 13: Numerical Analysis – Linear Equations(II)

l2 norm

Page 14: Numerical Analysis – Linear Equations(II)

l∞ norm
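
The norm formulas on these two slides are only in the slide images; for concreteness, here are the l2 and l∞ vector norms in C:

```c
#include <math.h>

/* l2 (Euclidean) norm: square root of the sum of squares. */
double norm_l2(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i] * x[i];
    return sqrt(s);
}

/* l-infinity norm: largest absolute component. */
double norm_linf(const double *x, int n)
{
    double m = 0.0;
    for (int i = 0; i < n; i++)
        if (fabs(x[i]) > m)
            m = fabs(x[i]);
    return m;
}
```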

Page 15: Numerical Analysis – Linear Equations(II)

Distance between vectors

Page 16: Numerical Analysis – Linear Equations(II)

Natural matrix norm

Eg.

Page 17: Numerical Analysis – Linear Equations(II)

l∞ norm of a matrix
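
Again the formula lives in the slide image; the l∞ norm of a matrix is its maximum absolute row sum, which takes a few lines of C:

```c
#include <math.h>

/* l-infinity norm of an n-by-n matrix: the maximum over rows of the
   sum of absolute values of the entries in that row. */
double mat_norm_linf(int n, double A[n][n])
{
    double max = 0.0;
    for (int i = 0; i < n; i++) {
        double row = 0.0;
        for (int j = 0; j < n; j++)
            row += fabs(A[i][j]);
        if (row > max)
            max = row;
    }
    return max;
}
```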

Page 18: Numerical Analysis – Linear Equations(II)

Eigenvalues and eigenvectors

* To be discussed later in detail.

Page 19: Numerical Analysis – Linear Equations(II)

Spectral radius

Page 20: Numerical Analysis – Linear Equations(II)

Convergent matrix equivalences

Page 21: Numerical Analysis – Linear Equations(II)

Convergence of a sequence

There is an important connection between the eigenvalues of the iteration matrix T and the expectation that the iterative method will converge: the iteration converges, for any initial guess, if and only if the spectral radius ρ(T) < 1.
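
As a concrete illustration (not from the slides), ρ(T) can be estimated by power iteration when T has a single dominant eigenvalue; the function name and fixed iteration count below are assumptions:

```c
#include <math.h>

/* Estimate the spectral radius rho(T) by power iteration: repeatedly
   apply T to a vector normalized to unit infinity-norm and watch the
   growth factor.  A rough sketch, not a robust eigenvalue solver. */
double spectral_radius_est(int n, double T[n][n], int iters)
{
    double x[n], y[n], rho = 0.0;
    for (int i = 0; i < n; i++) x[i] = 1.0;    /* arbitrary start vector */
    for (int k = 0; k < iters; k++) {
        double norm = 0.0;
        for (int i = 0; i < n; i++) {          /* y = T x */
            y[i] = 0.0;
            for (int j = 0; j < n; j++)
                y[i] += T[i][j] * x[j];
            if (fabs(y[i]) > norm) norm = fabs(y[i]);
        }
        if (norm == 0.0) return 0.0;           /* x fell into the nullspace */
        rho = norm;                            /* growth factor -> |lambda_max| */
        for (int i = 0; i < n; i++) x[i] = y[i] / norm;
    }
    return rho;
}
```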

Page 22: Numerical Analysis – Linear Equations(II)

Iterative Methods - Jacobi Iteration

Page 23: Numerical Analysis – Linear Equations(II)

Jacobi Iteration

Initial guess: start from an arbitrary x^(0).

Convergence condition: the Jacobi iteration is convergent, irrespective of the initial guess, if the matrix A is diagonally dominant: |a_ii| > Σ_{j≠i} |a_ij| for every row i. (A minimal implementation is sketched below.)
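
A minimal C sketch of the Jacobi iteration; the names, the convergence test, and the iteration cap are illustrative, not the lecture's own code:

```c
#include <math.h>
#include <string.h>

/* Jacobi iteration for A x = b: every component of the new iterate is
   computed from the *previous* iterate only.  Assumes nonzero diagonal
   entries; converges for any initial guess if A is diagonally dominant. */
void jacobi(int n, double A[n][n], double b[n], double x[n],
            int max_iter, double tol)
{
    double x_new[n];
    for (int k = 0; k < max_iter; k++) {
        double diff = 0.0;
        for (int i = 0; i < n; i++) {
            double s = b[i];
            for (int j = 0; j < n; j++)
                if (j != i)
                    s -= A[i][j] * x[j];      /* uses only old values */
            x_new[i] = s / A[i][i];
            if (fabs(x_new[i] - x[i]) > diff)
                diff = fabs(x_new[i] - x[i]);
        }
        memcpy(x, x_new, n * sizeof(double));
        if (diff < tol) break;                /* converged */
    }
}
```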

Page 24: Numerical Analysis – Linear Equations(II)

Eg. Jacobi iteration

Page 25: Numerical Analysis – Linear Equations(II)
Page 26: Numerical Analysis – Linear Equations(II)

Gauss-Seidel Iteration

Idea: utilize the most recently updated components of x as soon as they become available within a sweep.

Iteration formula: x_i^(k+1) = ( b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k) ) / a_ii

Convergence condition: the same as for the Jacobi iteration.

Advantage over Jacobi iteration: faster convergence (see the sketch below).
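
A matching Gauss-Seidel sketch under the same illustrative assumptions as the Jacobi sketch above; the only change is that each updated component is used immediately within the sweep:

```c
#include <math.h>

/* Gauss-Seidel iteration: identical to Jacobi except that updated
   components are used as soon as they are available, which typically
   speeds up convergence. */
void gauss_seidel(int n, double A[n][n], double b[n], double x[n],
                  int max_iter, double tol)
{
    for (int k = 0; k < max_iter; k++) {
        double diff = 0.0;
        for (int i = 0; i < n; i++) {
            double s = b[i];
            for (int j = 0; j < n; j++)
                if (j != i)
                    s -= A[i][j] * x[j];  /* x[j] for j < i is already updated */
            double xi = s / A[i][i];
            if (fabs(xi - x[i]) > diff)
                diff = fabs(xi - x[i]);
            x[i] = xi;                    /* in-place update */
        }
        if (diff < tol) break;            /* converged */
    }
}
```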

Page 27: Numerical Analysis – Linear Equations(II)

Eg. Gauss-Seidel iteration

Page 28: Numerical Analysis – Linear Equations(II)

Jacobi vs. Gauss-Seidel

Comparison: Eg. 1 vs. Eg. 2

Eg. 1: Jacobi

Eg. 2: Gauss-Seidel (faster convergence)

Page 29: Numerical Analysis – Linear Equations(II)

Variation of Gauss-Seidel Iteration

Successive Over-Relaxation (SOR): fast convergence; well suited for linear problems.

Successive Under-Relaxation (SUR): slower convergence but stable; well suited for nonlinear problems.

(A relaxed Gauss-Seidel sweep is sketched below.)
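
Both variants amount to blending the Gauss-Seidel update with the old value through a relaxation factor omega; a sketch under the same illustrative assumptions as the earlier iterations:

```c
#include <math.h>

/* Relaxed Gauss-Seidel sweep: x_i <- (1 - omega) * x_i + omega * x_i^GS.
   omega in (1, 2) gives SOR (faster on linear problems), omega in (0, 1)
   gives SUR (slower but more stable, useful for nonlinear problems),
   and omega = 1 reduces to plain Gauss-Seidel. */
void sor(int n, double A[n][n], double b[n], double x[n],
         double omega, int max_iter, double tol)
{
    for (int k = 0; k < max_iter; k++) {
        double diff = 0.0;
        for (int i = 0; i < n; i++) {
            double s = b[i];
            for (int j = 0; j < n; j++)
                if (j != i)
                    s -= A[i][j] * x[j];
            double x_gs = s / A[i][i];    /* plain Gauss-Seidel value */
            double xi = (1.0 - omega) * x[i] + omega * x_gs;
            if (fabs(xi - x[i]) > diff)
                diff = fabs(xi - x[i]);
            x[i] = xi;
        }
        if (diff < tol) break;            /* converged */
    }
}
```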

Page 30: Numerical Analysis – Linear Equations(II)

Eg. Gauss-Seidel vs. SOR

Page 31: Numerical Analysis – Linear Equations(II)

(cont.)

Faster convergence

Page 32: Numerical Analysis – Linear Equations(II)

Iterative Improvement of a Solution

Algorithm derivation: let x be the exact solution of A x = b, and let x + δx be the contaminated (computed) solution, which produces the wrong product A (x + δx) = b + δb.

Since A x = b, subtracting gives

A δx = A (x + δx) - b,

where the right-hand side is computable. Solving this equation for δx gives the improvement: subtract δx from the contaminated solution.

Repeat this procedure until convergence.

Page 33: Numerical Analysis – Linear Equations(II)

Numerical Aspect of Iterative Improvement

Using LU decomposition: once we have the decomposition of A, just a forward/back substitution using the residual will give the incremental correction δx.

(Refer to mprove() in p.56 of NR in C; a sketch follows.)
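
A sketch of one improvement step in the spirit of mprove(); lu_solve() is a hypothetical stand-in for a substitution routine such as NR's lubksb() that reuses an existing LU decomposition of A:

```c
#include <math.h>

/* Assumed to exist: solves A * dx = r by forward/back substitution,
   reusing an LU decomposition of A computed earlier. */
extern void lu_solve(int n, const double *r, double *dx);

/* One step of iterative improvement: compute the residual r = A*x - b
   (ideally in higher precision), solve A * dx = r, and subtract the
   correction.  Repeat until the residual stops shrinking. */
void improve(int n, double A[n][n], const double b[n], double x[n])
{
    double r[n], dx[n];
    for (int i = 0; i < n; i++) {
        long double s = -b[i];            /* higher precision for the residual */
        for (int j = 0; j < n; j++)
            s += (long double)A[i][j] * x[j];
        r[i] = (double)s;
    }
    lu_solve(n, r, dx);                   /* A * dx = r */
    for (int i = 0; i < n; i++)
        x[i] -= dx[i];                    /* corrected solution */
}
```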

Measure of ill-conditioning: let A' be the normalized matrix; if ||A'^{-1}|| is large, the system is ill-conditioned.

Read Box 10.1

Page 34: Numerical Analysis – Linear Equations(II)

Application: Circuit analysis

Kirchhoff’s current and voltage laws:

Current rule: 4 nodes. Voltage rule: 2 meshes.

6 unknowns, 6 equations.