Numerical Analysis – Linear Equations (II)



Hanyang University

Jong-Il Park

Singular Value Decomposition (SVD)

Why SVD? Gaussian elimination and LU decomposition fail to give satisfactory results for singular or numerically near-singular matrices.

SVD can cope with over- or under-determined problems

SVD constructs orthonormal basis vectors

What is SVD?

Any M×N matrix A whose number of rows M is greater than or equal to its number of columns N can be written as the product of an M×N column-orthogonal matrix U, an N×N diagonal matrix W with positive or zero elements (the singular values), and the transpose of an N×N orthogonal matrix V:

A = U W V^T
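As a quick illustration (a minimal sketch, not from the slides), NumPy's numpy.linalg.svd computes exactly this factorization; the 3×2 test matrix is an arbitrary choice:

```python
import numpy as np

# An arbitrary M x N test matrix with M >= N.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# full_matrices=False gives the "thin" SVD: U is M x N, W is N x N, Vt is N x N.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
W = np.diag(w)  # diagonal matrix of singular values (all >= 0)

# Verify the reconstruction A = U W V^T.
assert np.allclose(A, U @ W @ Vt)
print("singular values:", w)
```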

Properties of SVD

Orthonormality: U^T U = I => U^-1 = U^T (for square U); V^T V = I => V^-1 = V^T

Uniqueness: The decomposition can always be done, no matter how singular the matrix is, and it is almost unique.

SVD of a Square Matrix

Columns of U whose corresponding wj are nonzero: an orthonormal set of basis vectors spanning the range of A.

Columns of V whose corresponding wj are zero: an orthonormal basis for the nullspace of A.

Review of basic concepts in linear algebra

Homogeneous equation

Homogeneous equations (b = 0) with singular A

Any column of V whose corresponding wj is zero yields a solution x of Ax = 0, as in the sketch below.
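A minimal sketch of this fact, assuming NumPy; the deliberately singular test matrix and the numerical-zero threshold are illustrative choices:

```python
import numpy as np

# Deliberately singular: row 3 = row 1 + row 2, so the nullspace is nontrivial.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, w, Vt = np.linalg.svd(A)
tol = 1e-10 * w[0]          # loose numerical-zero threshold for this illustration

# Columns of V (rows of Vt) whose w_j is (numerically) zero solve Ax = 0.
for x in Vt[w < tol]:
    print("Ax =", A @ x)    # ~ zero vector
```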

Nonhomogeneous equation

Nonhomogeneous equation with singular A: x = V [diag(1/wj)] (U^T b), where we replace 1/wj by zero if wj = 0.

※ The solution x obtained by this method is the solution vector of the smallest length.

※ If b is not in the range of A, SVD finds the solution x in the least-squares sense, i.e. the x which minimizes r = |Ax − b|.
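A sketch of this solve, assuming NumPy; the singular matrix, right-hand side, and threshold below are illustrative choices (numpy.linalg.pinv packages the same construction):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])   # singular: row 3 = row 1 + row 2
b = np.array([1.0, 1.0, 1.0])    # not necessarily in the range of A

U, w, Vt = np.linalg.svd(A)
tol = 1e-10 * w[0]

# Replace 1/w_j by zero wherever w_j is (numerically) zero.
w_inv = np.zeros_like(w)
w_inv[w > tol] = 1.0 / w[w > tol]

# x = V diag(1/w_j) U^T b : the least-squares solution of smallest length.
x = Vt.T @ (w_inv * (U.T @ b))
print("x =", x)
print("|Ax - b| =", np.linalg.norm(A @ x - b))
```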

SVD solution – concept

SVD – under/over-determined problems

SVD for Fewer Equations than Unknowns

SVD for More Equations than Unknowns: SVD yields the least-squares solution; such over-determined systems are, in general, non-singular.

Applications of SVD

Application 1. Constructing an orthonormal basis

Problem: given N vectors in an M-dimensional vector space, find an orthonormal basis for the subspace they span.

Solution: stack the vectors as the columns of a matrix and take its SVD; the columns of the matrix U are the desired orthonormal basis (see the sketch below).
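A minimal sketch, assuming NumPy and randomly generated input vectors:

```python
import numpy as np

# N = 3 arbitrary vectors in an M = 4 dimensional space, stacked as columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, w, Vt = np.linalg.svd(A, full_matrices=False)

# The columns of U are orthonormal and span the same subspace as those of A.
assert np.allclose(U.T @ U, np.eye(3))
print(U)
```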

Application 2. Approximation of matrices

Vector norm

l2 norm

l∞ norm

Distance between vectors

Natural matrix norm

Eg.

l∞ norm of a matrix
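A short sketch of these definitions with NumPy (the vectors and matrix are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.0,  1.0, 1.0])

print(np.linalg.norm(x, 2))       # l2 norm: sqrt(1 + 4 + 9)
print(np.linalg.norm(x, np.inf))  # l-infinity norm: max |x_i| = 3
print(np.linalg.norm(x - y, 2))   # l2 distance between x and y

A = np.array([[1.0, -7.0],
              [2.0,  3.0]])
# Natural (induced) l-infinity matrix norm = maximum absolute row sum = 8.
print(np.linalg.norm(A, np.inf))
```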

Eigenvalues and eigenvectors

* To be discussed later in detail.

Spectral radius

Convergent matrix equivalences

Convergence of a sequence

There is an important connection between the eigenvalues of the iteration matrix T and whether the iterative method will converge: the iteration x^(k+1) = T x^(k) + c converges for every initial guess if and only if the spectral radius ρ(T) < 1.
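A minimal numerical check of this criterion, assuming NumPy and an arbitrary iteration matrix T:

```python
import numpy as np

T = np.array([[0.50, 0.25],
              [0.25, 0.50]])

rho = np.abs(np.linalg.eigvals(T)).max()  # spectral radius = max |eigenvalue|
print("rho(T) =", rho)  # 0.75 < 1, so x_(k+1) = T x_(k) + c converges
```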

Iterative Methods – Jacobi Iteration

Iteration formula (starting from an initial guess x^(0)): x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii

Convergence condition: the Jacobi iteration is convergent, irrespective of the initial guess, if the matrix A is diagonally dominant: |a_ii| > Σ_{j≠i} |a_ij| for every row i.
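A sketch of the scheme, assuming NumPy; the function name jacobi and the test system are illustrative, not taken from the slides:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: every x_i is updated from the previous iterate only."""
    x = np.zeros_like(b)
    D = np.diag(A)              # diagonal of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant, so convergence is guaranteed.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))             # ~ np.linalg.solve(A, b)
```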

Eg. Jacobi iteration

Gauss-Seidel Iteration

Idea: utilize the most recently updated values of x as soon as they become available.

Iteration formula: x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii

Convergence condition: the same as for the Jacobi iteration (diagonal dominance suffices).

Advantage over Jacobi iteration: faster convergence.
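A corresponding sketch (the function name gauss_seidel and the test system are again illustrative):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel: x_i immediately uses the freshly updated x_0 .. x_(i-1)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b))   # typically converges in fewer sweeps than Jacobi
```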

Eg. Gauss-Seidel iteration

Jacobi vs. Gauss-Seidel – Comparison: Eg. 1 vs. Eg. 2

Eg. 1 Jacobi

Eg. 2 Gauss-Seidel: faster convergence

Variation of Gauss-Seidel Iteration

Successive Over-Relaxation (SOR): fast convergence; well-suited for linear problems.

Successive Under-Relaxation (SUR): slower but stable convergence; well-suited for nonlinear problems.
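A sketch of the relaxed update, assuming NumPy; the function name sor, the relaxation factor omega = 1.1, and the test system are illustrative choices:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=500):
    """Gauss-Seidel with relaxation: omega > 1 is SOR, omega < 1 is SUR."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - s) / A[i, i]               # plain Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * x_gs  # blend via relaxation factor
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(sor(A, b, omega=1.1))   # omega in (1,2) over-relaxes; in (0,1) under-relaxes
```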

Eg. Gauss-Seidel vs. SOR

(cont.)

Faster convergence

Iterative Improvement of a Solution

Let x be the exact solution of Ax = b, x + δx the contaminated solution actually computed, and b + δb the resulting wrong product A(x + δx).

Algorithm derivation: since A(x + δx) = b + δb and Ax = b, subtracting gives A δx = δb = A(x + δx) − b.

Solving this equation for δx gives the improvement; subtract δx from the contaminated solution.

Repeat this procedure until convergence.

Numerical Aspect of Iterative Improvement

Using LU decomposition: once we have the LU factors of A, a single forward/back substitution with the residual δb as the right-hand side gives the incremental correction δx.

(Refer to mprove() in p.56 of NR in C)
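A sketch of the procedure (not the NR mprove() routine itself), assuming NumPy plus SciPy's lu_factor/lu_solve; in full-strength implementations the residual is accumulated in higher precision:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[10.0, 7.0,  8.0],
              [ 7.0, 5.0,  6.0],
              [ 8.0, 6.0, 10.0]])
b = np.array([25.0, 18.0, 24.0])

lu, piv = lu_factor(A)           # factor once...
x = lu_solve((lu, piv), b)       # ...initial (possibly contaminated) solution

for _ in range(3):
    r = A @ x - b                # residual = A(x + dx) - b = "db"
    dx = lu_solve((lu, piv), r)  # reuse the factors: one cheap substitution
    x = x - dx                   # apply the incremental correction
print(x, np.linalg.norm(A @ x - b))
```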

Measure of ill-conditioning: let A′ be the normalized matrix; if the norm of A′^-1 is large, the system is ill-conditioned.

Read Box 10.1
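A quick numerical illustration, assuming NumPy and two arbitrary matrices; the condition number cond(A) = ||A|| · ||A^-1|| makes the same comparison:

```python
import numpy as np

# A large ||A^-1|| (hence a large condition number) flags ill-conditioning.
good = np.array([[2.0, 0.0], [0.0, 1.0]])
bad  = np.array([[1.0, 1.0], [1.0, 1.0001]])
print(np.linalg.cond(good))   # ~2
print(np.linalg.cond(bad))    # ~4e4
```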

Application: Circuit Analysis – Kirchhoff's current and voltage laws

Current rule: 4 nodes; voltage rule: 2 meshes

6 unknowns, 6 equations
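The slides' 6×6 network itself is not reproduced here, so the sketch below applies the same idea to a smaller, hypothetical two-mesh resistor circuit, assuming NumPy; all component values are invented for illustration:

```python
import numpy as np

# Hypothetical circuit: a 10 V source and R1 = 2 ohm in mesh 1, R3 = 4 ohm in
# mesh 2, R2 = 3 ohm shared between the meshes. Kirchhoff's voltage law around
# each mesh gives a linear system in the mesh currents i1, i2.
A = np.array([[2.0 + 3.0, -3.0      ],
              [-3.0,       3.0 + 4.0]])
b = np.array([10.0, 0.0])

i = np.linalg.solve(A, b)
print("mesh currents [A]:", i)   # ~ [2.69, 1.15]
```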