07 Linear solvers - graphics.stanford.edu
Transcript of 07 Linear solvers - graphics.stanford.edu
(Sparse) Linear Solvers

Ax = b

Why?
Many geometry processing applications boil down to solving one or more linear systems:
• Parameterization
• Editing
• Reconstruction
• Fairing
• Morphing

Don’t you just invert A?
• Matrix not always invertible
– Not square: over-determined or under-determined [figure: tall and wide shapes of A, x, b]
– Singular
• Almost singular

Don’t you just invert A?
• Even if invertible:
– Very expensive: O(n³)
– Usually n = number of vertices/faces
– If A is sparse, A⁻¹ is dense

Problem definition
• Input
– Matrix A (m×n)
– Vector b (m×1)
• Output
– Vector x (n×1)
– Such that Ax = b
• Small time and memory complexity
• Use additional information on the problem

Properties of linear systems for DGP
• Sparse A
– Equations depend on the graph neighborhood
– Equations on vertices: ~7 non-zeros per row on average
– Number of non-zero elements is O(n)
[Figure: n = 1002, non-zeros = 7002]
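As an illustrative sketch (not from the slides): assembling a tiny graph Laplacian in SciPy makes the O(n) non-zero count visible directly. The edge list and matrix below are made-up toy data.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical connectivity of a tiny mesh graph: one tuple per edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4  # number of vertices

rows, cols, vals = [], [], []
deg = np.zeros(n)
for i, j in edges:
    # Each edge contributes two symmetric off-diagonal entries.
    rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
    deg[i] += 1.0
    deg[j] += 1.0
# The diagonal holds the vertex degrees.
rows += list(range(n)); cols += list(range(n)); vals += list(deg)

L = csr_matrix((vals, (rows, cols)), shape=(n, n))  # graph Laplacian
# Non-zero count is O(n): one diagonal entry per vertex plus two per edge.
```

Each row touches only a vertex and its graph neighbors, which is exactly why the matrix stays sparse as the mesh grows.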

Properties of linear systems for DGP
• Symmetric positive definite (positive eigenvalues)
– Many times A is the Laplacian matrix
– Laplacian systems are usually SPD
• A fixed, b changes – many right-hand sides
– Interactive applications
– Mesh geometry: same matrix for X, Y, Z

Linear solvers zoo
• A is square and regular
• Indirect solvers – iterative
– Jacobi
– Gauss-Seidel
– Conjugate gradient
• Direct solvers – factorization
– LU
– QR
– Cholesky
• Multigrid solvers

Jacobi iterative solver
• If all variables are known but one, its value is easy to find
• Idea:
– Guess initial values
– Repeat until convergence:
• Compute the value of one variable assuming all others are known
• Repeat for all variables

Jacobi iterative solver
• Let x* be the exact solution of Ax* = b. Then

  x*_i = (b_i − Σ_{j≠i} a_ij x*_j) / a_ii

• Jacobi method
– Set x^(0) to an initial guess
– While not converged:
• For i = 1 to n, using values from the previous iteration:

  x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii
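The update above can be sketched in NumPy as follows; the tridiagonal test system is an illustrative choice, not from the slides.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b by Jacobi iteration (A should be diagonally dominant)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)          # diagonal entries a_ii
    R = A - np.diagflat(D)  # off-diagonal part of A
    for _ in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant system, so convergence is guaranteed:
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

Note that each sweep only needs values from the previous iterate, which is what makes the method trivially parallel.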

Jacobi iterative solver – Pros
• Simple to implement
– No need for a sparse data structure
• Low memory consumption: O(n)
• Takes advantage of sparse structure
• Can be parallelized

Jacobi iterative solver – Cons
• Convergence guaranteed only for strictly diagonally dominant matrices
– The Laplacian is almost such a matrix
• Converges SLOWLY
– Smooths the error
– Once the error is smooth, convergence is slow
• Doesn’t take advantage of
– Same A, different b
– SPD matrix

Direct solvers – factorization
• If A is diagonal/triangular, the system is easy to solve
• Idea:
– Factor A into a product of k “simple” matrices
– Solve using k easy systems

Direct solvers – factorization
• Factoring is harder than solving
• Added benefit – multiple right-hand sides
– Factor only once!

Solving easy matrices: Diagonal

  x_i = b_i / a_ii

Solving easy matrices: Lower triangular
• Forward substitution
• Start from x_1:

  x_i = (b_i − Σ_{j<i} a_ij x_j) / a_ii

Solving easy matrices: Upper triangular
• Backward substitution
• Start from x_n:

  x_i = (b_i − Σ_{j>i} a_ij x_j) / a_ii
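Forward and backward substitution can be sketched directly; the small test matrices below are illustrative, not from the slides.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Lx = b for lower-triangular L, starting from x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i = (b_i - sum_{j<i} L_ij x_j) / L_ii
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def backward_substitution(U, b):
    """Solve Ux = b for upper-triangular U, starting from x_n."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} U_ij x_j) / U_ii
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[2.0, 0.0], [1.0, 3.0]])
U = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([2.0, 7.0])
xl = forward_substitution(L, b)
xu = backward_substitution(U, b)
```

Each solve costs O(n²) operations (one pass over the non-zeros), which is why triangular systems count as “easy”.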

LU factorization
• A = LU
– L lower triangular
– U upper triangular
• Exists for any non-singular square matrix (possibly after row permutation: PA = LU)
• Solve using L x1 = b and U x = x1
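A sketch of the factor-once / solve-many pattern using SciPy's `lu_factor`/`lu_solve`; the matrices here are toy examples.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

lu, piv = lu_factor(A)        # expensive step: factor once, O(n^3)
b1 = np.array([9.0, 8.0])
b2 = np.array([4.0, 3.0])
x1 = lu_solve((lu, piv), b1)  # cheap: forward + backward substitution, O(n^2)
x2 = lu_solve((lu, piv), b2)  # reuse the same factorization
```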

QR factorization
• A = QR
– Q orthogonal: Qᵀ = Q⁻¹
– R upper triangular
• Exists for any matrix
• Solve using Rx = Qᵀb

Cholesky factorization
• A = LLᵀ
– L lower triangular
• Exists for square symmetric positive definite matrices
• Solve using L x1 = b and Lᵀ x = x1
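The same factor-once pattern with Cholesky, reusing one factorization for several right-hand sides (as with the X, Y, Z coordinates of a mesh); the SPD matrix and right-hand sides below are toy examples.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# A small SPD (Laplacian-like) matrix
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

factor = cho_factor(A)  # A = L L^T, computed once

# Several right-hand sides, e.g. X, Y, Z coordinate columns:
B = np.array([[1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0],
              [3.0, 2.0, 1.0]])
X = cho_solve(factor, B)  # each column solved with two triangular sweeps
```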

Direct solvers – sparse factors?
• Usually the factors are not sparse, even if A is sparse
[Figure: sparse matrix and its (dense) Cholesky factor]

Sparse direct solvers
• Reorder variables/equations so the factors are sparse
• For example: bandwidth is preserved – reorder to minimize bandwidth
[Figure: sparse matrix with bounded bandwidth and its Cholesky factor]
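Bandwidth-reducing reordering can be sketched with SciPy's reverse Cuthill–McKee implementation; the matrix and the `bandwidth` helper below are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Largest |i - j| over the non-zero entries of a sparse matrix."""
    c = M.tocoo()
    return int(np.max(np.abs(c.row - c.col)))

# A symmetric sparse matrix with a scattered non-zero pattern
A = csr_matrix(np.array([
    [4., 0., 0., 1., 0.],
    [0., 4., 1., 0., 1.],
    [0., 1., 4., 0., 0.],
    [1., 0., 0., 4., 0.],
    [0., 1., 0., 0., 4.],
]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm, :][:, perm]  # apply the permutation to rows and columns
```

Applying the same permutation to rows and columns keeps the matrix symmetric while pulling the non-zeros toward the diagonal.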

Sparse direct solvers
• SPD matrices
– The Cholesky factor’s sparsity pattern can be derived from the matrix’s sparsity pattern
• Reorder to minimize the new non-zeros (fill-in) of the factor matrix
[Figure: reordered sparse matrix and its Cholesky factor]

Sparse Cholesky
• Symbolic factorization
– Can be reused for a matrix with the same sparsity structure
• Even if the values change!
• Only for SPD matrices
• Reordering is a heuristic – it does not always give good sparsity
– High memory complexity if the reordering was bad
• BUT
– Works extremely well for Laplacian systems
– Can solve systems up to n = 500K on a standard PC

Under-determined systems
• System is under-constrained – too many variables
• Solution: add constraints by pinning variables
– Add equations x1 = c1, x2 = c2, ..., xk = ck
• Or
– Replace variables with constants in the equations
• Better – smaller system
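Pinning by substituting constants (the "smaller system" option) can be sketched as follows; the 4×4 system and the pinned values are made-up illustrations.

```python
import numpy as np

# Hypothetical 4x4 system where x_0 and x_3 are pinned to constants.
A = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
b = np.zeros(4)

pinned = {0: 1.0, 3: 5.0}  # x_0 = 1, x_3 = 5
free = [i for i in range(4) if i not in pinned]

# Move the pinned columns to the right-hand side and solve the smaller system.
b_free = b[free] - sum(c * A[free, j] for j, c in pinned.items())
x_free = np.linalg.solve(A[np.ix_(free, free)], b_free)

# Reassemble the full solution vector.
x = np.zeros(4)
x[free] = x_free
for j, c in pinned.items():
    x[j] = c
```

The reduced system here is 2×2 instead of 4×4 plus extra constraint rows, which is the "smaller system" advantage the slide mentions.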

Over-determined systems
• System is over-constrained – too many equations
• Unless equations are dependent, no solution exists
• Instead, minimize ‖Ax − b‖²

Over-determined systems
• The minimizer of ‖Ax − b‖²
• Is the solution of AᵀAx = Aᵀb (the normal equations)
• AᵀA is symmetric positive definite (when A has full column rank)
– Can use Cholesky
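A sketch comparing the normal-equations solve with a library least-squares routine; the over-determined system below is a toy example.

```python
import numpy as np

# Over-determined system: 3 equations, 2 unknowns
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: A^T A x = A^T b (A^T A is SPD since A has full column rank)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# The same minimizer via a least-squares routine
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In practice a QR- or SVD-based least-squares solve is numerically safer, since forming AᵀA squares the condition number, but the normal equations let you reuse a Cholesky factorization across right-hand sides.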

Singular square matrices
• Equivalent to an under-determined system
• Add constraints
– By changing variables to constants
– NOT by adding equations!
• That would make the system non-square
• And much more expensive

Conclusions
• Many linear solvers exist
• The best choice depends on A
• DGP algorithms generate sparse SPD systems
• Research shows sparse linear solvers are best for DGP
• Sparse algorithms exist also for non-square systems
• If solving in Matlab – use the “\” operator

References
• “Efficient Linear System Solvers for Mesh Processing”, Botsch et al., 2005