Direct Methods for Sparse Linear Systems
Lecture 4
Alessandra Nardi
Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy
Last lecture review
• Solution of system of linear equations
• Existence and uniqueness review
• Gaussian elimination basics
– LU factorization
– Pivoting
Outline
• Error Mechanisms
• Sparse Matrices
– Why are they nice
– How do we store them
– How can we exploit and preserve sparsity
Error Mechanisms
• Round-off error
– Pivoting helps

• Ill-conditioning (almost singular)
– Bad luck: property of the matrix
– Pivoting does not help

• Numerical stability of the method
Ill-Conditioning: Norms

• Norms are useful to discuss error in numerical problems
• A norm on a vector space V is a function ||·|| satisfying:

(1) ||x|| > 0 if x ≠ 0, x ∈ V
(2) ||αx|| = |α| ||x||, x ∈ V
(3) ||x + y|| ≤ ||x|| + ||y||, x, y ∈ V
Ill-Conditioning: Vector Norms

L2 (Euclidean) norm: ||x||_2 = (Σ_{i=1}^n x_i²)^(1/2)    (the unit ball ||x||_2 ≤ 1 is the unit circle)

L1 norm: ||x||_1 = Σ_{i=1}^n |x_i|

L∞ norm: ||x||_∞ = max_i |x_i|    (the unit ball ||x||_∞ ≤ 1 is the unit square)
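As a quick check, the three norms can be computed directly; a small sketch of mine in NumPy (the example vector is illustrative, not from the slides):

```python
import numpy as np

x = np.array([3.0, -4.0])

l2 = np.sqrt(np.sum(x**2))      # Euclidean (L2) norm
l1 = np.sum(np.abs(x))          # L1 norm: sum of absolute values
linf = np.max(np.abs(x))        # L-infinity norm: largest component

print(l2, l1, linf)             # 5.0 7.0 4.0
```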
Ill-Conditioning: Matrix Norms

Vector-induced norm:

||A|| = max_{x≠0} ||Ax|| / ||x|| = max_{||x||=1} ||Ax||

The induced norm of A is the maximum "magnification" of x by A.

||A||_1 = max_j Σ_{i=1}^n |A_ij| = max abs column sum
||A||_∞ = max_i Σ_{j=1}^n |A_ij| = max abs row sum
||A||_2 = (largest eigenvalue of AᵀA)^(1/2)
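A sketch (mine, with an illustrative matrix) computing the three induced norms just defined; the eigenvalue formula for ||A||_2 agrees with NumPy's built-in 2-norm:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm_1   = np.max(np.sum(np.abs(A), axis=0))             # max absolute column sum
norm_inf = np.max(np.sum(np.abs(A), axis=1))             # max absolute row sum
norm_2   = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # sqrt of largest eigenvalue of A^T A

print(norm_1, norm_inf)   # 6.0 7.0
```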
Ill-Conditioning : Matrix Norms
• More properties of the matrix norm:

||AB|| ≤ ||A|| ||B||
||I|| = 1

• Condition number:

κ(A) = ||A|| ||A⁻¹||

– It can be shown that κ(A) ≥ 1
– Large κ(A) means the matrix is almost singular (ill-conditioned)
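A small illustration (the example matrices are mine): κ(A) = 1 for the identity, while a nearly singular matrix has a huge condition number:

```python
import numpy as np

A_good = np.eye(3)
A_bad  = np.array([[1.0, 1.0],
                   [1.0, 1.0 + 1e-10]])   # rows almost linearly dependent

kappa_good = np.linalg.cond(A_good)   # exactly 1 for the identity
kappa_bad  = np.linalg.cond(A_bad)    # enormous: the matrix is almost singular
```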
What happens if we perturb b?
Ill-Conditioning: Perturbation Analysis
Perturb b: M(x + δx) = b + δb. Subtracting Mx = b leaves M δx = δb, so

δx = M⁻¹ δb   ⇒   ||δx|| ≤ ||M⁻¹|| ||δb||

From Mx = b we also get ||b|| ≤ ||M|| ||x||, i.e. 1/||x|| ≤ ||M|| / ||b||. Combining:

||δx|| / ||x|| ≤ ||M|| ||M⁻¹|| ||δb|| / ||b|| = κ(M) ||δb|| / ||b||

κ(M) large is bad
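The perturbation bound ||δx||/||x|| ≤ κ(M) ||δb||/||b|| can be checked numerically; the matrix and perturbation in this sketch are random choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
db = 1e-8 * rng.standard_normal(4)   # small perturbation of the right-hand side

x  = np.linalg.solve(M, b)
dx = np.linalg.solve(M, db)          # M (x + dx) = b + db  =>  M dx = db

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(M) * np.linalg.norm(db) / np.linalg.norm(b)
assert lhs <= rhs                    # the bound holds
```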
What happens if we perturb M?
Ill-Conditioning: Perturbation Analysis
Perturb M: (M + δM)(x + δx) = b. Subtracting Mx = b leaves M δx + δM (x + δx) = 0, so

δx = −M⁻¹ δM (x + δx)   ⇒   ||δx|| ≤ ||M⁻¹|| ||δM|| ||x + δx||

||δx|| / ||x + δx|| ≤ ||M|| ||M⁻¹|| ||δM|| / ||M|| = κ(M) ||δM|| / ||M||

κ(M) large is bad.

Bottom line: if the matrix is ill-conditioned, round-off puts you in trouble.
Geometric Approach is more intuitive
Write M = [M₁ M₂]. Solving Mx = b means finding x₁ M₁ + x₂ M₂ = b.

• When the columns M₁/||M₁|| and M₂/||M₂|| are orthogonal, the split of b between them is well determined.
• When the columns are nearly aligned, it is hard to decide how much of M₁ versus how much of M₂ makes up b: small perturbations of b produce large changes in x₁, x₂.
Numerical Stability
• Rounding errors may accumulate and propagate unstably in a bad algorithm.
• It can be proven that for Gaussian elimination the accumulated error is bounded
Summary on Error Mechanisms for GE
• Rounding: due to the machine's finite precision we have an error in the solution even if the algorithm is perfect
– Pivoting helps to reduce it

• Matrix conditioning
– If the matrix is "good", then complete pivoting solves any round-off problem
– If the matrix is "bad" (almost singular), then there is nothing to do

• Numerical stability
– How rounding errors accumulate
– GE is stable
LU – Computational Complexity
• Computational complexity: O(n³), where M is n × n
• We cannot afford this complexity
• Exploit the natural sparsity that occurs in circuit equations
– Sparsity: many zero elements
– A matrix is sparse when it is advantageous to exploit its sparsity

• Exploiting sparsity: O(n^1.1) to O(n^1.5)
LU – Goals of exploiting sparsity
(1) Avoid storing zero entries
– Memory usage reduction
– Decomposition is faster since you do not need to access them (but the data structure is more complicated)

(2) Avoid trivial operations
– Multiplication by zero
– Addition with zero
(3) Avoid losing sparsity
Sparse Matrices – Resistor Line: Tridiagonal Case

A line of resistors gives a tridiagonal nodal matrix (columns 1, 2, 3, 4, …, m−1, m):

```
X X
X X X
  X X X
    X X X
      X X X
        X X X
          X X X
            X X
```
GE Algorithm – Tridiagonal Example

For i = 1 to n-1 {              "For each row"
    For j = i+1 to n {          "For each target row below the source"
        M_ji = M_ji / M_ii      (multiplier; M_ii is the pivot)
        For k = i+1 to n {      "For each row element beyond the pivot"
            M_jk = M_jk − M_ji M_ik
        }
    }
}

In the tridiagonal case only j = i+1 and k = i+1 do real work, so the factorization takes order N operations!
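The tridiagonal specialization can be sketched directly (this is the classic Thomas-algorithm form; function and variable names are my choices):

```python
def tridiag_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n).

    a: sub-diagonal (length n, a[0] unused)
    b: diagonal (length n)
    c: super-diagonal (length n, c[n-1] unused)
    d: right-hand side (length n)
    No pivoting: assumes e.g. a diagonally dominant matrix.
    """
    n = len(b)
    b = list(b); d = list(d)
    # Forward elimination: one multiplier per row instead of O(n^2) work.
    for i in range(1, n):
        m = a[i] / b[i - 1]      # multiplier M_ji / M_ii
        b[i] -= m * c[i - 1]     # only one entry beyond the pivot changes
        d[i] -= m * d[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

For example, the resistor-line matrix with diagonal 2 and off-diagonals −1 is solved in a single O(n) sweep.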
Sparse Matrices – Fill-in – Example 1

Circuit: a current source i_s1 drives node V₁; R₁, R₃, R₅ connect nodes V₁, V₂, V₃ to ground; R₂ connects V₁ to V₂ and R₄ connects V₂ to V₃. The nodal equations (unknowns e₁, e₂, e₃):

```
[ 1/R1 + 1/R2        -1/R2                0        ] [e1]   [ i_s1 ]
[    -1/R2      1/R2 + 1/R3 + 1/R4     -1/R4       ] [e2] = [  0   ]
[      0             -1/R4         1/R4 + 1/R5     ] [e3]   [  0   ]
```

The nodal matrix is symmetric and diagonally dominant.
Sparse Matrices – Fill-in – Example 1

Matrix non-zero structure (X = non-zero):

```
X X X
X X 0
X 0 X
```

After one LU step, eliminating column 1 combines row 1 with rows 2 and 3, so the zeros at positions (2,3) and (3,2) become fill-ins:

```
X X X
X X X
X X X
```
Sparse Matrices – Fill-in – Example 2

```
X X X X
X X 0 0
0 X X 0
X 0 0 X
```

Fill-ins propagate: fill-ins created in step 1 result in further fill-ins in step 2.
Sparse Matrices – Fill-in & Reordering

Node reordering can reduce fill-in:
– Preserves properties (symmetry, diagonal dominance)
– Equivalent to swapping rows and columns

With node ordering (V₁, V₂, V₃) the matrix is

```
x x x
x x 0
x 0 x
```

and factorization produces fill-ins. With ordering (V₂, V₁, V₃) it becomes tridiagonal,

```
x x 0
x x x
0 x x
```

and there are no fill-ins.
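The effect of ordering on fill-in can be checked mechanically with a purely structural elimination; this helper (mine, not from the slides) tracks only the non-zero pattern:

```python
def count_fill_ins(pattern):
    """Symbolic GE on a set of (row, col) non-zero positions;
    returns the number of fill-ins created (no pivoting)."""
    nz = set(pattern)
    n = max(r for r, _ in nz) + 1
    fills = 0
    for i in range(n):
        below = [j for j in range(i + 1, n) if (j, i) in nz]   # rows to eliminate
        right = [k for k in range(i + 1, n) if (i, k) in nz]   # pivot row's entries
        for j in below:
            for k in right:
                if (j, k) not in nz:    # a new non-zero: fill-in
                    nz.add((j, k))
                    fills += 1
    return fills

arrow   = {(0,0),(0,1),(0,2),(1,0),(1,1),(2,0),(2,2)}      # hub node ordered first
tridiag = {(0,0),(0,1),(1,0),(1,1),(1,2),(2,1),(2,2)}      # reordered: tridiagonal
print(count_fill_ins(arrow), count_fill_ins(tridiag))      # 2 0
```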
Exploiting and maintaining sparsity

• Criteria for exploiting sparsity:
– Minimum number of operations
– Minimum number of fill-ins

• Pivoting to maintain sparsity is an NP-complete problem, so heuristics are used: Markowitz, Berry, Hsieh and Ghausi, Nakhla and Singhal and Vlach
– Choice: Markowitz (about 5% more fill-ins, but faster)

• Pivoting for accuracy may conflict with pivoting for sparsity
Sparse Matrices – Fill-in & Reordering

Where can fill-in occur? Only in the unfactored part of the matrix: the rows below and the columns to the right of the current pivot (the already-factored rows and the multipliers cannot change).

Fill-in estimate = (non-zeros in unfactored part of row − 1) × (non-zeros in unfactored part of column − 1) — the Markowitz product.
Markowitz Reordering (Diagonal Pivoting)

For i = 1 to n
    Find the diagonal j ≥ i with the minimum Markowitz product
    Swap rows and columns i ↔ j
    Factor the new row i and determine fill-ins
End

Greedy Algorithm (but close to optimal)!
Sparse Matrices – Fill-in & Reordering
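The loop above can be sketched structurally; this helper is mine and ignores the numerical pivot values, so it is a minimum-Markowitz ordering on the pattern only:

```python
def markowitz_order(pattern, n):
    """Greedy diagonal-pivot ordering (structural sketch).

    pattern: set of (row, col) non-zeros; returns the pivot order.
    At each step pick the remaining diagonal with the smallest
    Markowitz product (r-1)*(c-1), then symbolically eliminate it.
    """
    nz = set(pattern)
    remaining = list(range(n))
    order = []
    while remaining:
        def product(i):
            r = sum(1 for k in remaining if (i, k) in nz)   # non-zeros in row i
            c = sum(1 for k in remaining if (k, i) in nz)   # non-zeros in column i
            return (r - 1) * (c - 1)
        p = min(remaining, key=product)
        order.append(p)
        remaining.remove(p)
        # Symbolic elimination: fill in products of p's row and column.
        rows = [j for j in remaining if (j, p) in nz]
        cols = [k for k in remaining if (p, k) in nz]
        for j in rows:
            for k in cols:
                nz.add((j, k))
    return order

# Arrow matrix: node 0 is the hub; Markowitz picks a leaf first.
arrow = {(i, i) for i in range(4)} | {(0, j) for j in (1, 2, 3)} | {(j, 0) for j in (1, 2, 3)}
order = markowitz_order(arrow, 4)
print(order[0])   # 1: a leaf (product 1) is chosen before the hub (product 9)
```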
Sparse Matrices – Fill-in & Reordering

Why try only diagonals?

• Corresponds to node reordering in the nodal formulation (e.g. relabeling nodes (1, 2, 3) as (3, 1, 2))
• Reduces search cost
• Preserves matrix properties:
– Diagonal dominance
– Symmetry
Sparse Matrices – Fill-in & Reordering

Pattern of a filled-in matrix (figure): the leading rows and columns stay very sparse, while the trailing block becomes dense.
Sparse Matrices – Fill-in & Reordering
Unfactored random matrix (figure)
Sparse Matrices – Fill-in & Reordering
Factored random matrix (figure)
Sparse Matrices – Data Structure
• Several ways of storing a sparse matrix in a compact form
• Trade-off:
– Storage amount
– Cost of data accessing and update procedures
• Efficient data structure: linked list
Sparse Matrices – Data Structure 1
Orthogonal linked list
Sparse Matrices – Data Structure 2

Arrays of data in a row, addressed through a vector of row pointers; each stored entry holds the matrix value and its column index:

```
Row 1 → (Val 11, Col 11) (Val 12, Col 12) … (Val 1K, Col 1K)
Row 2 → (Val 21, Col 21) (Val 22, Col 22) … (Val 2L, Col 2L)
  ⋮
Row N → (Val N1, Col N1) (Val N2, Col N2) … (Val Nj, Col Nj)
```
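This row-pointer layout is essentially today's compressed sparse row (CSR) format; a minimal sketch with illustrative values:

```python
# CSR-style storage for the rows-of-(value, column) layout above.
# Example matrix (values are illustrative):
#   [ 10  0  2 ]
#   [  0  3  0 ]
#   [  1  0  4 ]
vals = [10.0, 2.0, 3.0, 1.0, 4.0]   # non-zero values, row by row
cols = [0, 2, 1, 0, 2]              # column index of each stored value
rowp = [0, 2, 3, 5]                 # rowp[i]:rowp[i+1] delimits row i

def get(i, j):
    """Look up entry (i, j); zero if not stored."""
    for k in range(rowp[i], rowp[i + 1]):
        if cols[k] == j:
            return vals[k]
    return 0.0

print(get(0, 2), get(1, 0))   # 2.0 0.0
```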
Sparse Matrices – Data Structure: Problem of Misses

Eliminating source row i from target row j:

Row i: M_{i,1}  M_{i,4}  M_{i,5}  M_{i,7}  M_{i,9}  M_{i,12}  M_{i,15}
Row j: M_{j,1}  M_{j,7}  M_{j,15}

We must read all the row i entries to find the 3 that match row j. Every miss is an unneeded memory reference (expensive!). We could have more misses than operations!
Sparse Matrices – Data Structure: Scattering for Miss Avoidance

Row i: M_{i,1}  M_{i,4}  M_{i,5}  M_{i,7}  M_{i,9}  M_{i,12}  M_{i,15}

1) Read all the elements in row j, and scatter them into a length-n dense vector
2) Access only the needed elements using array indexing!
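The two scatter steps can be sketched directly (the values and the multiplier are illustrative choices of mine):

```python
# Scatter row j into a dense work vector, update it with direct
# indexing from row i's entries, then gather the result back.
n = 16
row_i = {1: 2.0, 4: 1.0, 5: 3.0, 7: 4.0, 9: 1.0, 12: 2.0, 15: 5.0}
row_j = {1: 6.0, 7: 2.0, 15: 4.0}
mult = 0.5                           # multiplier M_ji / M_ii

work = [0.0] * n                     # 1) scatter row j into a dense vector
for c, v in row_j.items():
    work[c] = v

for c, v in row_i.items():           # 2) direct array indexing: no list searching
    work[c] -= mult * v

# Gather the updated row back into sparse form.
row_j = {c: work[c] for c in range(n) if work[c] != 0.0}
```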
Sparse Matrices – Graph Approach

Structurally Symmetric Matrices and Graphs

• One node per matrix row
• One edge per off-diagonal pair: nodes i and j are connected when M_ij and M_ji are non-zero

(Figure: a 5-node example graph, nodes 1–5, with its non-zero pattern.)
Sparse Matrices – Graph Approach: Markowitz Products

For a structurally symmetric matrix, the Markowitz product of diagonal i is (degree of node i)²:

M₁₁: (degree 1)² = 3 · 3 = 9
M₂₂: (degree 2)² = 2 · 2 = 4
M₃₃: (degree 3)² = 3 · 3 = 9
M₄₄: (degree 4)² = 2 · 2 = 4
M₅₅: (degree 5)² = 2 · 2 = 4
Sparse Matrices – Graph Approach: Factorization

One step of LU factorization, seen on the graph:

• Delete the node associated with the pivot row
• "Tie together" the graph edges: the eliminated node's neighbours become pairwise connected, and each newly created edge is a fill-in

(Figure: the 5-node example graph before and after eliminating one node.)
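One elimination step on the structure graph can be sketched as follows (helper names are mine); the pivot's neighbours become a clique, and each new edge is a fill-in:

```python
def eliminate(adj, p):
    """One step of LU factorization on the structure graph (sketch).

    adj: dict node -> set of neighbours. Removes pivot node p, ties
    its neighbours together, and returns the new edges (fill-ins).
    """
    nbrs = adj.pop(p)
    for a in nbrs:
        adj[a].discard(p)            # delete the pivot node
    fills = []
    for a in nbrs:
        for b in nbrs:
            if a < b and b not in adj[a]:
                adj[a].add(b)        # tie the neighbours together
                adj[b].add(a)
                fills.append((a, b)) # this edge is a fill-in
    return fills

# Star graph: node 0 connected to 1, 2, 3 (an arrow matrix).
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
fills = eliminate(adj, 0)            # neighbours 1, 2, 3 become a clique
print(sorted(fills))                 # [(1, 2), (1, 3), (2, 3)]
```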
Sparse Matrices – Graph Approach: Example

A 5 × 5 structurally symmetric matrix and its graph. Markowitz products (= node degree², written as column count × row count):

Node 1: 3 × 3 = 9
Node 2: 2 × 2 = 4
Node 3: 3 × 3 = 9
Node 4: 3 × 3 = 9
Node 5: 3 × 3 = 9

Node 2 has the minimum Markowitz product.
Sparse Matrices – Graph Approach: Example

Swap node 2 with node 1 (it has the minimum Markowitz product), then factor: the pivot node is removed from the graph, its neighbours are tied together, and the graph on the remaining nodes 1, 3, 4, 5 describes the unfactored part.
Summary

• Gaussian elimination error mechanisms
– Ill-conditioning
– Numerical stability

• Gaussian elimination for sparse matrices
– Improved computational cost: factor in O(N^1.5) operations (dense is O(N^3))
– Example: tridiagonal matrix factorization in O(N)
– Data structures
– Markowitz reordering to minimize fill-ins
– Graph-based approach