
Krylov subspace methods:

a versatile tool in Scientific Computing

V. Simoncini

Dipartimento di Matematica

Università di Bologna

valeria@dm.unibo.it

http://www.dm.unibo.it/~simoncin


The framework

Given the large $n \times n$ linear system

$$Ax = b$$

Find $x_m$ such that $x_m \approx x$.

$x_0$ initial guess, $r_0 = b - Ax_0$ (if no information is available, take $x_0 = 0$)

Krylov subspace approximation: $x_m = x_0 + z_m$,

$$z_m \in K_m(A, r_0) := \mathrm{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{m-1} r_0\}$$

⋆ Projection onto a much smaller space, $m \ll n$


Basic Idea of Projection

Assume $x_0 = 0$.

Let $\{v_1, \ldots, v_m\}$ be a basis of

$$K_m(A, r_0) = \mathrm{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{m-1} r_0\}$$

and set $V_m := [v_1, \ldots, v_m]$. Then

$$x_m = V_m y_m, \qquad y_m \in \mathbb{R}^m,$$

with $y_m$ the coefficients of the linear combination.
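To make this concrete, here is a minimal sketch in plain NumPy (dense matrices for illustration; the function names are mine, not from the talk): the Arnoldi process builds an orthonormal basis of $K_m(A, r_0)$, and a FOM-type approximation is then read off from a small $m \times m$ system.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Orthonormal basis V of K_m(A, r0) via modified Gram-Schmidt,
    plus the (m+1) x m Hessenberg matrix H with A V_m = V_{m+1} H."""
    n = r0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # orthogonalize against all v_i
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)  # (breakdown check omitted)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def fom_step(A, b, m):
    """x_m = V_m y_m with r_m ⊥ K_m (Galerkin condition), x0 = 0."""
    beta = np.linalg.norm(b)
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m); e1[0] = beta
    y = np.linalg.solve(H[:m, :m], e1)   # small m x m system
    return V[:, :m] @ y
```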


Some popular Krylov subspace methods

$$r_m = b - Ax_m = b - AV_m y_m$$

Mostly theoretical (for nonsymmetric A):

GMRES (Generalized Minimal RESidual)

$$y_m : \ \min_{y \in \mathbb{R}^m} \|r_m\|_2$$

FOM (Full Orthogonalization Method)

$$y_m : \ r_m \perp K_m$$

Note: for A symmetric positive definite, FOM becomes CG (Conjugate Gradients)
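With the arnoldi helper from the sketch above, GMRES differs from FOM only in the small problem solved for $y_m$: a least-squares problem with the $(m+1) \times m$ Hessenberg matrix instead of a square linear solve. A minimal sketch (again my naming):

```python
import numpy as np

def gmres_step(A, b, m):
    """x_m = V_m y_m with y_m minimizing ||beta e1 - H y||_2,
    i.e. the residual norm over K_m (x0 = 0)."""
    beta = np.linalg.norm(b)
    V, H = arnoldi(A, b, m)              # sketch defined above
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return V[:, :m] @ y
```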


Some popular Krylov subspace methods

More Practical (for nonsymmetric A):

GMRES(m): Restarted GMRES

FOM(m): Restarted FOM (far less popular)

BiCGStab(ℓ): short-term recurrence

Restarted Procedure: Given $x_0$, $r_0$,

do until convergence:
∗ Run m steps of “Method” to get $x_m$
∗ Compute $r_m = b - Ax_m$
∗ Set $x_0 \leftarrow x_m$, $r_0 \leftarrow r_m$

Characteristics:

⋆ Economy versions

⋆ “Good” properties are lost, or preserved only locally
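In practice such a restarted loop is usually taken from a library; e.g., SciPy's GMRES exposes the restart length directly. A usage sketch on a random sparse test problem (entirely illustrative; note that the tolerance keyword is rtol in recent SciPy releases and tol in older ones):

```python
import numpy as np
from scipy.sparse import eye as speye, random as sprandom
from scipy.sparse.linalg import gmres

n = 2000                                  # toy problem, illustration only
A = speye(n) + 0.5 * sprandom(n, n, density=5.0 / n, random_state=0)
b = np.ones(n)

for m in (15, 20):                        # GMRES(m): restart length m
    x, info = gmres(A, b, restart=m, maxiter=500, rtol=1e-10)
    print(f"GMRES({m}): info = {info}")   # info == 0 means converged
```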


Outline

Application-driven practical issues:

Basic considerations on restarted methods

“Quasi-optimal” methods

Indefinite inner products

Inexact methods

Krylov subspace methods: a versatile tool for complex problems:

many requirements may be relaxed


Restarted Methods

Convergence strongly depends on choice of m ...

[Figure: norm of relative residual vs. number of matrix–vector multiplies, comparing GMRES(15) and GMRES(20).]


Restarted Methods

Convergence strongly depends on choice of m ... true?

[Figure: norm of relative residual vs. global subspace dimension generated, comparing GMRES(15), FOM(15), and GMRES(20).]


Restarted Methods

Convergence strongly depends on choice of m

[Figure: norm of relative residual vs. number of matrix–vector multiplies for GMRES(10) and FOM(10).]


Restarted Methods

Switch to FOM residual vector at the very first restart

[Figure: norm of relative residual vs. number of matrix–vector multiplies, comparing GMRES(15) with GMRES(15)+FOM, which switches to the FOM residual at the first restart.]

Pictures from Simoncini, SIMAX 2000.


Outline

Application-driven practical issues:

Basic considerations on restarted methods

“Quasi-optimal” methods

Indefinite inner products

Inexact methods


Enhanced Restarted Methods

Warning: a larger m does not always mean faster convergence

Current research:

Deflated methods (originally used for A s.p.d.): Mansfield, Nicolaides, Erhel et al., Saad et al., Nabben, Vuik, ...

Augmented methods (information saved from previous restarts): De Sturler, Morgan, Sorensen, Baglama et al., Baker et al., ...
See also Eiermann, Ernst, Schneider (JCAM 2000)

Truncated methods (only local information maintained): Golub, Ye, Notay, Szyld, ...



De Sturler’s method

A tricky way to enhance the approximation space

[Figure: relative residual norm vs. iteration index, comparing GMRES(50), GMRES(150), and GCROT(7,23,23,4,1,0).]

Code: courtesy of Oliver Ernst.


Truncated methods and “Quasi-optimality”. I

⋆ A truncated method discards “older” vectors:

$$\{v_1, v_2, \ldots, v_{m-k}, \underbrace{v_{m-k+1}, \ldots, v_m}_{\text{orthogonal}}, v_{m+1}, \ldots\}$$

(local optimality properties)

Limited memory requirements

Optimality is lost

How “old” is old?
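A minimal sketch of the truncation mechanism (my naming, illustration only): each new vector is orthogonalized only against the previous k basis vectors, so memory and work per step stay bounded, at the price of losing full orthogonality, and hence optimality.

```python
import numpy as np

def truncated_arnoldi(A, r0, m, k):
    """Incomplete orthogonalization: orthogonalize only against the
    last k vectors; H becomes banded Hessenberg, optimality is lost."""
    n = r0.size
    V = np.zeros((n, m + 1))   # in practice only k columns are stored
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - k + 1), j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```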


Truncated methods and “Quasi-optimality”. II

Example: A is non-normal, spectrum on the circle $|1 - z| = 0.5$

[Figures: residual norm vs. number of iterations for truncation lengths k = 5, 10, 20, 30, ∞. Left: low eigenvector condition number; right: high eigenvector condition number.]


Truncated methods and “Quasi-optimality”. III

⋆ If A is nonsymmetric but a harmless modification of a symmetric matrix, then short truncation suffices:

$$Ax = b, \ A \text{ symmetric} \quad\Rightarrow\quad P^{-1}Ax = P^{-1}b$$

$$P^{-1}v = L^{-T}L^{-1}v + \varepsilon 1, \qquad \varepsilon = 10^{-5}, \qquad L \text{ incomplete Cholesky factor of } A$$

[Figure: residual norm vs. number of iterations for PCG and truncated runs with k = ∞ and k = 2, 3.]


Outline

Application-driven practical issues:

Basic considerations on restarted methods

“Quasi-optimal” methods

Indefinite inner products

Inexact methods


Need for a “different” inner product?

Typical orthogonality: $r_m \perp K_m$

Common alternative:

⋆ Given $M$ Hermitian and positive definite,

$$r_m \perp_M K_m,$$

i.e., for $\mathrm{Range}(V_m) = K_m$ it holds $V_m^* M r_m = 0$;

may lead to minimization of $\|r_m\|_M$ or $\|e_m\|_M$

♣ In many cases, use of $M^{-1/2} A M^{-1/2}$ hpd

What about different alternatives?
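Whatever the choice of M, the only change needed in the basis construction is replacing the Euclidean inner product by $\langle u, w \rangle_M = u^* M w$ in the Gram–Schmidt loop. A minimal sketch, assuming M hpd (my naming):

```python
import numpy as np

def m_orthonormalize(W, M):
    """Gram-Schmidt in the M-inner product <u, w> = u^T M w;
    returns V with V.T @ M @ V = I and the same column span as W."""
    V = np.array(W, dtype=float)
    for j in range(V.shape[1]):
        for i in range(j):
            V[:, j] -= (V[:, i] @ (M @ V[:, j])) * V[:, i]
        V[:, j] /= np.sqrt(V[:, j] @ (M @ V[:, j]))
    return V
```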


Motivations for an indefinite inner product

Exploit inherent properties of the problem. For instance,

A complex symmetric

Exploit matrix structure, e.g.

$$A = \begin{pmatrix} H & B \\ B^T & 0 \end{pmatrix}$$

(or, say, A Hamiltonian)

... to gain in efficiency with (hopefully) no loss in reliability


An example. Indefinite (Constraint) Preconditioner

$$Ax = b, \qquad A = \begin{pmatrix} H & B \\ B^T & 0 \end{pmatrix}, \qquad H = H^T, \ H \ge 0$$

Preconditioning: $AP^{-1}x = b$

Block indefinite preconditioner:

$$P = \begin{pmatrix} \widetilde H & B \\ B^T & 0 \end{pmatrix}, \qquad \widetilde H \approx H$$

∗ $AP^{-1}$ is not symmetrizable!

∗ However: $AP^{-1}$ is $P^{-1}$-Hermitian (Hermitian with respect to $P^{-1}$)

⇒ Cheap short-term recurrence (simplified Lanczos, Freund & Nachtigal ’95)


Preconditioner Performance

$$P^{-1} = \begin{pmatrix} \widetilde H & B \\ B^T & 0 \end{pmatrix}^{-1} = \begin{pmatrix} I & -B \\ O & I \end{pmatrix} \begin{pmatrix} I & O \\ O & -(B^T B)^{-1} \end{pmatrix} \begin{pmatrix} I & O \\ -B^T & I \end{pmatrix}$$

($\widetilde H = I$ if prescaling is used)

3D Magnetostatic problem. Number of iterations:

size     QMR      QMR(P_def)   QMR(P)
1119     2368     40           15
2208     2825     36           13
4371     5191     43           17
8622     >10000   49           16
22675    >10000   81           25

$P_{\mathrm{def}} = \mathrm{diag}(I, B^T B)$ hpd

In practice: $B^T B \approx S$ via an incomplete Cholesky factorization ⇒ $\widehat P$
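A sketch of how this factorization can be applied as an operator, assuming the prescaled case $\widetilde H = I$ and using a sparse direct solve with $B^T B$ in place of the incomplete Cholesky variant (names are mine; B a SciPy sparse matrix):

```python
import numpy as np
from scipy.sparse.linalg import splu

def constraint_prec(B):
    """v -> P^{-1} v for P = [[I, B], [B^T, 0]], applied via the
    three block factors above (right to left)."""
    n = B.shape[0]
    solve = splu((B.T @ B).tocsc()).solve  # in practice: incomplete Cholesky
    def apply(v):
        v1, v2 = v[:n], v[n:]
        w2 = v2 - B.T @ v1                 # lower block-triangular factor
        w2 = -solve(w2)                    # middle factor: -(B^T B)^{-1}
        w1 = v1 - B @ w2                   # upper block-triangular factor
        return np.concatenate([w1, w2])
    return apply
```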


Outline

Application-driven practical issues:

Basic considerations on restarted methods

“Quasi-optimal” methods

Indefinite inner products

Inexact methods


Inexact methods

We are given an operator $v \mapsto \mathcal{A}_\varepsilon(v)$.

Efficiently solve the given problem in the approximation space

$$K_m = \mathrm{span}\{v, \mathcal{A}_{\varepsilon_1}(v), \mathcal{A}_{\varepsilon_2}(\mathcal{A}_{\varepsilon_1}(v)), \ldots\}, \qquad v \in \mathbb{C}^n,$$

with $\dim(K_m) = m$, where $\mathcal{A}_\varepsilon \to A$ for $\varepsilon \to 0$ ($\varepsilon$ may be tuned)

⋆ for $\mathcal{A} = A$, $\varepsilon = 0$ $\Rightarrow$ $K_m = \mathrm{span}\{v, Av, A^2v, \ldots, A^{m-1}v\}$

⋆ Analysis also possible for the eigenproblem
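As a toy model of $\mathcal{A}_\varepsilon$ (entirely illustrative, not from the talk): perturb each exact matrix–vector product by a vector of norm ε, so that $\mathcal{A}_\varepsilon \to A$ as $\varepsilon \to 0$.

```python
import numpy as np

def inexact_operator(A, eps, rng=None):
    """v -> A v + f with ||f|| = eps: a toy inexact operator A_eps."""
    rng = rng or np.random.default_rng(0)
    def apply(v):
        f = rng.standard_normal(v.size)
        return A @ v + eps * f / np.linalg.norm(f)
    return apply
```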


Some typical situations

$\mathcal{A}(v)$ function (linear in v):

The result of a complex functional application

Schur complement: $\mathcal{A} = B^T S^{-1} B$, with S expensive to invert (see the sketch after this list)

Flexible preconditioned system: $AP^{-1}x = b$, where $P^{-1}v_i \approx P_i^{-1} v_i$

etc.

In the eigenvalue context: shift-and-invert strategy

$$Ax = \lambda Mx, \qquad \mathcal{A}(v) = (A - \sigma M)^{-1} v$$
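For the Schur-complement case, a sketch of the inexact operator in SciPy (my naming; S is assumed symmetric positive definite so an inner CG applies; the inner tolerance plays the role of ε; recent SciPy uses the rtol keyword, older releases tol):

```python
from scipy.sparse.linalg import LinearOperator, cg

def schur_operator(B, S, inner_tol):
    """v -> B^T S^{-1} B v, with S^{-1} applied only approximately:
    each application of the operator costs one inner CG solve."""
    def matvec(v):
        w, info = cg(S, B @ v, rtol=inner_tol)  # inexact inner solve
        return B.T @ w
    n = B.shape[1]
    return LinearOperator((n, n), matvec=matvec)
```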


Questions

⋆ Do we need ε small to get a good approximation?

(good approximation: $\|r_m\| \le \varepsilon_0$, a fixed tolerance)

⋆ Do we need ε fixed throughout?

⋆ Do we still converge to a meaningful solution if ε varies?

⋆ What happens to the convergence rate when ε varies?


Assuming A is exact...

$K_m$ Krylov subspace, $V_m = [v_1, \ldots, v_m]$ orthonormal basis

Arnoldi relation:

$$AV_m = V_m H_m + v_{m+1} h_{m+1,m} e_m^T = V_{m+1} \underline{H}_m,$$

with $v = V_m e_1 \|v\|$ and $\underline{H}_m$ the $(m+1) \times m$ extended Hessenberg matrix
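A quick numerical sanity check of this relation, reusing the arnoldi sketch from earlier (illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
v = rng.standard_normal(200)
V, H = arnoldi(A, v, 20)     # sketch defined earlier in these notes
# A V_m = V_{m+1} H_m (extended Hessenberg), up to rounding:
print(np.linalg.norm(A @ V[:, :20] - V @ H))   # ~ machine precision
```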


Working with an inaccurate A

$$\mathcal{A} = A \quad\to\quad \mathcal{A}_\varepsilon(v) = Av + f$$

$$AV_m = V_{m+1}\underline{H}_m + \underbrace{F_m}_{[f_1, f_2, \ldots, f_m]}$$

$F_m$ error matrix, $\|f_j\| = O(\varepsilon_j)$

——————————

How large is $F_m$ allowed to be? With $x_m = V_m y_m$,

$$r_m = b - AV_m y_m = b - V_{m+1}\underline{H}_m y_m - F_m y_m = \underbrace{V_{m+1}(e_1\beta - \underline{H}_m y_m)}_{\text{computed residual} \ =: \ \tilde r_m} - \ F_m y_m,$$

where $F_m y_m = \sum_{i=1}^m f_i \, (y_m)_i$.


Relaxed methods

$$F_m y_m = \sum_{i=1}^m f_i \, (y_m)_i$$

In fact, for several methods there exists $\ell_m$ such that

$$|(y_m)_i| \le \ell_m \|r_{i-1}\|.$$

Therefore, $\|f_i\|$ is allowed to be large!

More precisely, if

$$\|f_i\| \le \frac{1}{\ell_m\, m} \, \frac{1}{\|r_{i-1}\|} \, \varepsilon, \qquad i = 1, \ldots, m,$$

then $\|F_m y_m\| \le \varepsilon$ $\Rightarrow$ $\|r_m - \tilde r_m\| \le \varepsilon$. A sketch of the resulting relaxation schedule follows below.

Bouras, Frayssé, Giraud, Simoncini, Szyld, Sleijpen, Van den Eshof, Gratton, ...
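A sketch of the schedule this bound suggests (illustrative only; $\ell_m$ is taken as 1 and a floor keeps the tolerance finite): the accuracy requested from the inexact operator is loosened as the outer residual decreases.

```python
def relaxed_tolerance(res_norm, eps, m, ell=1.0, floor=1e-16):
    """Accuracy to request at the current step, given the current
    outer residual norm: ||f_i|| may grow like 1/||r_{i-1}|| while
    still guaranteeing ||F_m y_m|| <= eps."""
    return eps / (ell * m * max(res_norm, floor))

# outer residuals decreasing over the iteration:
for r in (1.0, 1e-2, 1e-4, 1e-6):
    print(f"||r|| = {r:.0e} -> inner tol {relaxed_tolerance(r, 1e-8, 30):.2e}")
```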


Numerical experiment: Schur complement

$$\underbrace{B^T S^{-1} B}_{\mathcal{A}} \, x = b, \qquad \text{at each iteration } i \text{ solve } Sw_i = Bv_i$$

Inexact FOM:

$$\delta_m = \|r_m - (b - V_{m+1}\underline{H}_m y_m)\|$$

[Figure: magnitude vs. number of iterations for inexact FOM: the gap δm, the true residual norm ‖rm‖, the computed residual norm ‖r̃m‖, and the inner tolerance εinner.]


Eigenproblem

Inverted Arnoldi: $Ax = \lambda x$, find $\min |\lambda|$; $y \leftarrow \mathcal{A}(v) = A^{-1}v$

Matrix SHERMAN5

[Figures: left, spectrum of SHERMAN5 in the complex plane (real vs. imaginary part); right, magnitude vs. number of outer iterations: inner tolerance ε, inner residual norm, exact and inexact residuals.]


Structural Dynamics

$$(A + \sigma B)x = b$$

Solve for many σ's simultaneously ⇒ $(AB^{-1} + \sigma I)x = b$

(Perotti & Simoncini 2002)

Inexact solves with B at each iteration:

              Prec. fill-in 5             Prec. fill-in 10
              e-time [s]   # outer its    e-time [s]   # outer its
Tol 10^-6     14066        296            13344        289
Dynamic Tol   11579        301            11365        293

≈20% improvement with a tiny change in the code

(Preconditioned CG-type iteration for B)


Relaxed procedure

⋆ A may be replaced by $\mathcal{A}_{\varepsilon_i}$ with increasing $\varepsilon_i$, and the method still converges

⋆ Stable procedure for problems that are not too sensitive (e.g., to non-normality)

This property is inherent to the Krylov approximation

Many more applications fit this general setting


Conclusions

Often, there is enough confidence to tailor methods to problems

Ability to relax many of the classical requirements

Further enhancements are ahead!

Recent survey: Recent computational developments in Krylov subspace methods for linear systems, Simoncini & Szyld, 2005; 59 pp., 352 references; to appear in Numer. Linear Algebra w/ Appl.

http://www.dm.unibo.it/~simoncin
