A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring


Page 1: A fast nonstationary preconditioning strategy for ill ...

A fast nonstationary preconditioning

strategy for ill-posed problems, with application to image deblurring

Marco Donatelli

Dept. of Science and High Technology – U. Insubria (Italy)

Joint work with M. Hanke (U. Mainz), D. Bianchi, A. Buccini (U. Insubria),

Y. Cai, T. Z. Huang (UESTC, P. R. China)

Firenze – 15 April 2015

Page 2: A fast nonstationary preconditioning strategy for ill ...

Outline

Ill-posed problems and iterative regularization

A nonstationary preconditioned iteration

Image deblurring

Combination with frame-based methods

Page 3: A fast nonstationary preconditioning strategy for ill ...

Outline

Ill-posed problems and iterative regularization

A nonstationary preconditioned iteration

Image deblurring

Combination with frame-based methods

Page 4: A fast nonstationary preconditioning strategy for ill ...

The model problem

Consider the solution of ill-posed equations

Tx = y , (1)

where T : X → Y is a linear operator between Hilbert spaces.

T is a compact operator; the singular values of T decay gradually to zero without a significant gap.

Assume that problem (1) has a solution x† of minimal norm.

Goal: compute an approximation of x† starting from approximate data y^δ ∈ Y, instead of the exact data y ∈ Y, with

‖y^δ − y‖ ≤ δ ,   (2)

where δ ≥ 0 is the corresponding noise level.

Page 5: A fast nonstationary preconditioning strategy for ill ...

Image deblurring problems

y^δ = Tx + ξ

T is doubly Toeplitz, large, and severely ill-conditioned (discretization of an integral equation of the first kind)

y^δ is the known, measured data (blurred and noisy image)

ξ is noise; ‖ξ‖ = δ

−→ discrete ill-posed problems (Hansen, 90’s)

Page 6: A fast nonstationary preconditioning strategy for ill ...

Regularization

The singular values of T are large in the low frequencies, decay rapidly to zero, and are small in the high frequencies.

The solution of Tx = y δ requires some sort of regularization:

x = T†y^δ = x† + T†ξ ,

where ‖T†ξ‖ is large.

x† =⇒ x = T†y^δ


Page 9: A fast nonstationary preconditioning strategy for ill ...

Tikhonov regularization

Balance the data fitting and the “explosion” of the solution:

min_x ‖Tx − y^δ‖² + α‖x‖²

which is equivalent to

x = (T∗T + αI)⁻¹ T∗y^δ,

where α > 0 is a regularization parameter.
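As an illustration of the filtering effect of Tikhonov regularization, a minimal Python sketch; the matrix T, the data y_delta and the value of alpha below are illustrative placeholders, not taken from the talk:

```python
import numpy as np

def tikhonov(T, y_delta, alpha):
    """Tikhonov solution x = (T^* T + alpha I)^{-1} T^* y_delta computed via the SVD."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    # Filter factors s_i / (s_i^2 + alpha) damp the contribution of small singular values
    filt = s / (s**2 + alpha)
    return Vt.T @ (filt * (U.T @ y_delta))

# Toy usage with a classical severely ill-conditioned matrix (Hilbert matrix)
n = 64
T = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y_delta = T @ x_true + 1e-3 * np.random.randn(n)
x_alpha = tikhonov(T, y_delta, alpha=1e-2)
```

The small singular values are damped by the factors σ_i²/(σ_i² + α), which is exactly what prevents the noise amplification ‖T†ξ‖ described above.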

Page 10: A fast nonstationary preconditioning strategy for ill ...

Iterative regularization methods (semi-convergence)

Classical iterative methods first reduce the algebraic error in the low frequencies (well-conditioned subspace); when they start to reduce the algebraic error in the high frequencies, the restoration error increases because of the noise.

The regularization parameter is the stopping iteration.

[Plot: restoration error vs. iterations, illustrating the semi-convergence behavior.]

Page 11: A fast nonstationary preconditioning strategy for ill ...

Preconditioned regularization

Replace the original problem Tx = y^δ with

P⁻¹Tx = P⁻¹y^δ

such that

1. inversion of P is cheap

2. P ≈ T, but not too closely (T† is unbounded while P⁻¹ must be bounded!)

Alert! Preconditioners can be used to accelerate the convergence, but an imprudent choice of preconditioner may spoil the achievable quality of the computed restorations.

Page 12: A fast nonstationary preconditioning strategy for ill ...

Classical preconditioner

Historically, the first attempt of this sort was by Hanke, Nagy, and Plemmons (1993). In that work

P = C_ε ,

where C_ε is the optimal doubly circulant approximation of T, with eigenvalues set to one for frequencies above 1/ε. Very fast, but the choice of ε is delicate and not robust.

Subsequently, other regularizing preconditioners have been suggested: Kilmer and O’Leary (1999), Egger and Neubauer (2005), Brianzi, Di Benedetto, and Estatico (2008).

Page 13: A fast nonstationary preconditioning strategy for ill ...

Hybrid regularization

Combine iterative and direct regularization (Björck, O’Leary, Simmons, Nagy, Reichel, Novati, ...).

Main idea:

1. Compute iteratively a Krylov subspace by Lanczos or Arnoldi.
2. At every iteration solve the projected Tikhonov problem in the small-size Krylov subspace.

Usually a few iterations, and so a small Krylov subspace, are enough to compute a good approximation.
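A minimal sketch of this hybrid idea, assuming a dense matrix T and a fixed regularization parameter alpha (real hybrid solvers select alpha adaptively at every step); T, y_delta, k and alpha are placeholders:

```python
import numpy as np

def hybrid_tikhonov(T, y_delta, k, alpha):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization, then Tikhonov on the projected problem."""
    m, n = T.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    B = np.zeros((k + 1, k))                       # lower bidiagonal projected matrix
    beta = np.linalg.norm(y_delta)
    U[:, 0] = y_delta / beta
    v_prev = np.zeros(n); beta_next = 0.0
    for j in range(k):
        v = T.T @ U[:, j] - beta_next * v_prev
        alpha_j = np.linalg.norm(v); V[:, j] = v / alpha_j
        u = T @ V[:, j] - alpha_j * U[:, j]
        beta_next = np.linalg.norm(u); U[:, j + 1] = u / beta_next
        B[j, j] = alpha_j; B[j + 1, j] = beta_next
        v_prev = V[:, j]
    # Projected Tikhonov problem: min ||B z - beta e1||^2 + alpha ||z||^2
    e1 = np.zeros(k + 1); e1[0] = beta
    z = np.linalg.solve(B.T @ B + alpha * np.eye(k), B.T @ e1)
    return V @ z
```

The Tikhonov problem solved at each step has only size k, so varying alpha (or choosing it by a parameter rule) is cheap.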

Page 14: A fast nonstationary preconditioning strategy for ill ...

Outline

Ill-posed problems and iterative regularization

A nonstationary preconditioned iteration

Image deblurring

Combination with frame-based methods

Page 15: A fast nonstationary preconditioning strategy for ill ...

Nonstationary iterated Tikhonov regularization

Given x0 compute for n = 0, 1, 2, . . .

z_n = (T∗T + α_n I)⁻¹ T∗r_n ,   r_n = y^δ − Tx_n ,   (3a)

x_{n+1} = x_n + z_n .   (3b)

This is some sort of regularized iterative refinement.

Choices of αn:

α_n = α > 0, ∀n: stationary.

α_n = α q^n, where α > 0 and 0 < q ≤ 1: geometric sequence (fastest convergence) [Groetsch and Hanke, 1998].

T∗T + αI and TT∗ + αI can be expensive to invert!
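A minimal sketch of iteration (3) with the geometric sequence α_n = α q^n; the stopping criterion (e.g. the discrepancy principle) is omitted, and T, y_delta and the parameter values are placeholders:

```python
import numpy as np

def iterated_tikhonov(T, y_delta, alpha0=1.0, q=0.8, n_iter=20):
    """Nonstationary iterated Tikhonov (3) with the geometric sequence alpha_n = alpha0 * q^n."""
    n = T.shape[1]
    x = np.zeros(n)
    TtT = T.T @ T
    for k in range(n_iter):
        r = y_delta - T @ x                          # residual r_n = y^delta - T x_n
        alpha_n = alpha0 * q**k
        z = np.linalg.solve(TtT + alpha_n * np.eye(n), T.T @ r)
        x = x + z                                    # in practice, stop by the discrepancy principle
    return x
```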

Page 16: A fast nonstationary preconditioning strategy for ill ...

The starting idea

The iterative refinement is based on the error equation Te_n ≈ r_n, which is correct only up to the noise; hence consider instead

Ce_n ≈ r_n ,   (4)

possibly tolerating a slightly larger misfit.

Approximate T by C and iterate

h_n = (C∗C + α_n I)⁻¹ C∗r_n ,   r_n = y^δ − Tx_n ,   (5)

x_{n+1} = x_n + h_n .   (6)

Preconditioner ⇒ P = (C∗C + α_n I)⁻¹ C∗

Page 17: A fast nonstationary preconditioning strategy for ill ...

Nonstationary preconditioning

Differences to previous preconditioners:

gradual approximation of the optimal regularization parameter

nonstationary scheme, not to be used in combination with cgls

essentially as fast as nonstationary iterated Tikhonov regularization

A hybrid regularization

Instead of projecting into a small-size Krylov subspace, project the error equation into a nearby space of the same size, but where the operator is diagonal (for image deblurring). The projected linear system (the rhs r_n) changes at every iteration.

Page 18: A fast nonstationary preconditioning strategy for ill ...

Estimation of αn

Assumption:

‖(C − T)z‖ ≤ ρ‖Tz‖ ,   z ∈ X ,   (7)

for some 0 < ρ < 1/2.

Adaptive choice of αn

Choose α_n such that (4) is solved up to a certain relative amount:

‖r_n − Ch_n‖ = q_n‖r_n‖ ,   (8)

where q_n < 1, but not too small (q_n > ρ + (1 + ρ)δ/‖r_n‖).

Page 19: A fast nonstationary preconditioning strategy for ill ...

The Algorithm

Choose τ = (1 + 2ρ)/(1 − 2ρ) and fix q ∈ (2ρ, 1).
While ‖r_n‖ > τδ, let τ_n = ‖r_n‖/δ, and compute α_n s.t.

‖r_n − Ch_n‖ = q_n‖r_n‖ ,   q_n = max{ q, 2ρ + (1 + ρ)/τ_n } .   (9a)

Then, update

h_n = (C∗C + α_n I)⁻¹ C∗r_n ,   (9b)

x_{n+1} = x_n + h_n ,   r_{n+1} = y^δ − Tx_{n+1} .   (9c)

Details

The parameter q prevents r_n from decreasing too rapidly.

The unique α_n can be computed by a Newton iteration.
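A minimal Python sketch of iteration (9) for image deblurring, assuming C is the BCCB (periodic-BC) matrix built from a PSF already padded to the image size and centered at pixel (0, 0); for simplicity T itself is also applied with periodic BCs, and α_n is found by bisection on the monotone residual equation rather than by the Newton iteration used in the talk:

```python
import numpy as np

def ait_deblur(psf, y_delta, delta, rho=1e-3, q=0.7, max_iter=50):
    """Sketch of the approximated iterated Tikhonov iteration (9).
    `psf` is padded to the image size with its center at pixel (0, 0); C is the
    corresponding BCCB matrix, diagonalized by the 2D FFT. As an extra
    simplification, T is also applied here with periodic BCs."""
    c = np.fft.fft2(psf)                                      # eigenvalues of C
    blur = lambda u: np.real(np.fft.ifft2(c * np.fft.fft2(u)))
    tau = (1 + 2 * rho) / (1 - 2 * rho)
    x = np.zeros_like(y_delta, dtype=float)
    r = y_delta - blur(x)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:                  # discrepancy principle
            break
        tau_n = np.linalg.norm(r) / delta
        q_n = max(q, 2 * rho + (1 + rho) / tau_n)
        rhat = np.fft.fft2(r)
        target = (q_n * np.linalg.norm(rhat)) ** 2
        # ||r_n - C h_n(alpha)||^2 = sum |rhat|^2 alpha^2 / (|c|^2 + alpha)^2 is
        # increasing in alpha; find the root of f(alpha) = target by bisection
        # (the talk solves the same scalar equation with Newton's method).
        f = lambda a: np.sum(np.abs(rhat) ** 2 * a**2 / (np.abs(c) ** 2 + a) ** 2)
        lo, hi = 0.0, 1.0
        while f(hi) < target:
            hi *= 2.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
        alpha_n = 0.5 * (lo + hi)
        h = np.real(np.fft.ifft2(np.conj(c) * rhat / (np.abs(c) ** 2 + alpha_n)))
        x = x + h
        r = y_delta - blur(x)
    return x
```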

Page 20: A fast nonstationary preconditioning strategy for ill ...

Theoretical results [D., Hanke, IP 2013]

Theorem. The norm of the iteration error e_n = x† − x_n decreases monotonically as long as

‖r_n‖ ≤ τδ ≤ ‖r_{n−1}‖ ,   τ > 1 fixed.

Theorem. For exact data (δ = 0) the iterates x_n converge to the solution of Tx = y that is closest to x_0 in the norm of X.

Theorem. For noisy data (δ > 0), as δ → 0, the approximation x^δ converges to the solution of Tx = y that is closest to x_0 in the norm of X.

Page 21: A fast nonstationary preconditioning strategy for ill ...

Extensions [Buccini, D., manuscript 2015]

Projection into convex set Ω:

x_{n+1} = P_Ω(x_n + h_n).

In the computation of h_n by Tikhonov, replace I with L∗L, where L is a regularization operator (e.g., the first derivative):

h_n = (C∗C + α_n L∗L)⁻¹ C∗r_n ,

under the assumption that L and C have the same basis of eigenvectors.

In both cases the previous convergence analysis can be extended, even if it is not straightforward (take care of N(L) ...).

Page 22: A fast nonstationary preconditioning strategy for ill ...

Outline

Ill-posed problems and iterative regularization

A nonstationary preconditioned iteration

Image deblurring

Combination with frame-based methods

Page 23: A fast nonstationary preconditioning strategy for ill ...

Boundary Conditions (BCs)

zero Dirichlet Periodic

Reflective Antireflective

Page 24: A fast nonstationary preconditioning strategy for ill ...

The matrix C

Space invariant point spread function (PSF)

T has a doubly Toeplitz-like structure that carries the “correct” boundary conditions.

C is a doubly circulant matrix, diagonalizable by the FFT, that corresponds to periodic BCs.

The boundary conditions have a very local effect:

T − C = E + R , (10)

where E is of small norm and R of small rank.
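A small one-dimensional illustration of (10): for a Toeplitz matrix built from a quickly decaying kernel, a circulant approximation (here a Strang-type construction, chosen for illustration since the talk does not specify one) differs from T only near the corners, i.e. by a small-norm term plus a small-rank term:

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

# 1D illustration: symmetric Toeplitz blurring matrix T vs. a circulant approximation C
n = 8
kernel = np.exp(-2.0 * np.arange(n) ** 2)        # quickly decaying symmetric kernel
T = toeplitz(kernel)

col = kernel.copy()                              # first column of C: wrap the kernel around
m = n // 2
col[m + 1:] = kernel[1:n - m][::-1]
C = circulant(col)                               # eigenvalues of C are np.fft.fft(col)

# T - C is nonzero only near the corners: the sum of a small-norm term (the tiny
# tail of the kernel in T) and a small-rank term (the wrap-around entries of C)
D = T - C
print(np.round(D, 4))
print("norm:", np.linalg.norm(D), "numerical rank:", np.linalg.matrix_rank(D))
```

In 2D the analogous construction gives a BCCB matrix C diagonalized by the 2D FFT, as used in the deblurring experiments.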

Page 25: A fast nonstationary preconditioning strategy for ill ...

Choice of parameters

We do not know whether a rigorous estimate of

‖(T − C)z‖ ≤ ρ‖Tz‖

will hold for all (relevant) vectors z and some ρ < 1/2. According to our numerical tests, the previous estimate is satisfied with ρ of a few percent, i.e., 10⁻³ or 10⁻².

ρ = 10⁻³; by checking the norm of the residual we can recognize whether ρ should be increased.

q = 0.7, but all q ∈ [0.6, 0.8] give comparable results.

The regularization matrix L is the finite difference discretization of the first derivative.

Page 26: A fast nonstationary preconditioning strategy for ill ...

Numerical methods

Our algorithms:

AIT (Approximated Iterated Tikhonov [D., Hanke, IP 2013])

APIT (Projected AIT [Buccini, D., 2015]) with Ω = {x : x ≥ 0}.

ARIT (Regularized AIT [Buccini, D., 2015])

APRIT (APIT + ARIT [Buccini, D., 2015])

Compared methods:

Hybrid (Lanczos–Tikhonov [Chung et al., 2008])

TwIST [Bioucas-Dias, Figueiredo, 2007]

RRAT (Range-restricted Arnoldi–Tikhonov [Lewis, Reichel, 2009])

NN-RS-GAT (Nonnegative Restarted Generalized Arnoldi–Tikhonov [Gazzola, Nagy, 2014])

Page 27: A fast nonstationary preconditioning strategy for ill ...

Numerical results

We add white Gaussian noise to the blurred image, with relative noise level

ν = δ / ‖y‖ .

To compare the quality of the restorations, we evaluate their relative restoration errors (RRE), i.e.,

RRE = ‖x − x†‖ / ‖x†‖ ,

where x is the computed solution.
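For reference, a couple of hypothetical helper functions matching these definitions (the noise realization and the seed are arbitrary):

```python
import numpy as np

def add_noise(y, nu, rng=np.random.default_rng(0)):
    """Add white Gaussian noise with relative level nu = delta / ||y||."""
    xi = rng.standard_normal(y.shape)
    xi *= nu * np.linalg.norm(y) / np.linalg.norm(xi)
    return y + xi, np.linalg.norm(xi)          # noisy data and delta = ||xi||

def rre(x, x_true):
    """Relative restoration error ||x - x_true|| / ||x_true||."""
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```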

Page 28: A fast nonstationary preconditioning strategy for ill ...

Example 1 (Satellite)

ν = 1%, Zero BCs.

[Images: true image, PSF, observed image.]

Page 29: A fast nonstationary preconditioning strategy for ill ...

Restorations

Method      RRE
AIT         0.30218
APIT        0.27493
ARIT        0.30197
APRIT       0.27821
Hybrid      0.40085
TwIST       0.42649
RRAT        0.30838
NN-RS-GAT   0.32599

[Restored images: AIT, APIT, RRAT, NN-RS-GAT.]

Page 30: A fast nonstationary preconditioning strategy for ill ...

Example 2 (Barbara)

ν = 3%, motion blur, Antireflective BCs.

[Images: true image, observed image.]

Page 31: A fast nonstationary preconditioning strategy for ill ...

Restorations

Method             RRE
AIT                0.13838
APIT               0.13838
ARIT               0.13275
APRIT              0.13275
Hybrid             0.15919
Hybrid (Optimal)   0.13337
TwIST              0.14264
RRAT               0.16293
NN-RS-GAT          0.16502

[Restored images: AIT, ARIT, Hybrid (Opt.), TwIST.]

Page 32: A fast nonstationary preconditioning strategy for ill ...

Outline

Ill-posed problems and iterative regularization

A nonstationary preconditioned iteration

Image deblurring

Combination with frame-based methods

Page 33: A fast nonstationary preconditioning strategy for ill ...

Synthesis approach

Images have a sparse representation in the wavelet domain.

Let W∗ be a wavelet or tight-frame synthesis operator (W∗W = I) and v the frame coefficients such that

x = W∗v .

The deblurring problem can be reformulated in terms of the frame coefficients v as

min_{v ∈ R^s} { µ‖v‖₁ + 1/(2λ) ‖v‖² : v = arg min_{v ∈ R^s} ‖TW∗v − y^δ‖²_P } .   (11)

Page 34: A fast nonstationary preconditioning strategy for ill ...

Modified Linearized Bregman algorithm (MLBA)

Denote by Sµ the soft-thresholding function

[S_µ(v)]_i = sgn(v_i) max{ |v_i| − µ, 0 } .   (12)

The MLBA proposed in [Cai, Osher, and Shen, SIIMS 2009] is

z_{n+1} = z_n + W T∗P(y^δ − TW∗v_n),
v_{n+1} = λ S_µ(z_{n+1}),   (13)

where z_0 = v_0 = 0.

Choosing P = (TT∗ + αI)⁻¹ and λ = 1, the iteration (13) converges to the unique minimizer of (11).

The authors of the MLBA proposed to use P = (CC∗ + αI)⁻¹.

If v_n = z_n, the first equation (inner iteration) of the MLBA is a preconditioned Landweber iteration.
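A minimal sketch of (12)–(13); the blurring operator, its adjoint, the analysis/synthesis operators W, W∗ and the preconditioner P are passed as user-supplied callables, since their concrete form depends on the chosen frame and boundary conditions:

```python
import numpy as np

def soft_threshold(v, mu):
    """Componentwise soft-thresholding S_mu, eq. (12)."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def mlba(y_delta, Top, Tadj, W, Wadj, P, mu, lam=1.0, n_iter=100):
    """Sketch of the modified linearized Bregman iteration (13).
    Top/Tadj, W/Wadj and P are placeholder callables for T, T^*, the analysis
    operator W, the synthesis operator W^*, and the preconditioner."""
    z = np.zeros(W(y_delta).shape)          # frame coefficients, z_0 = v_0 = 0
    v = np.zeros_like(z)
    for _ in range(n_iter):
        z = z + W(Tadj(P(y_delta - Top(Wadj(v)))))
        v = lam * soft_threshold(z, mu)
    return Wadj(v)                           # restored image x = W^* v
```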

Page 35: A fast nonstationary preconditioning strategy for ill ...

AIT + Bregman splitting

Replace preconditioned Landweber with AIT.

Usual assumption

‖(C − T )u‖ ≤ ρ ‖Tu‖ , u ∈ X .

Further assumption

‖CW∗(v − S_µ(v))‖ ≤ ρδ ,   ∀ v ∈ R^s ,   (14)

which amounts to considering a soft-threshold parameter µ = µ(δ) such that µ(δ) → 0 as δ → 0.

Page 36: A fast nonstationary preconditioning strategy for ill ...

AIT + Bregman splitting – 2

Algorithm [Cai, D., Bianchi, Huang, 2015]

z_{n+1} = z_n + W C∗(CC∗ + α_n I)⁻¹(y^δ − TW∗v_n),
v_{n+1} = S_µ(z_{n+1}),   (15)

where the parameters (α_n, stopping iteration, etc.) are fixed as in AIT.

Theorem. For noisy data (δ > 0), as δ → 0, the approximation x^δ converges to the solution of Tx = y that is closest to x_0 in the norm of X.

If an estimate of the best α is available, we can fix α_n = α_opt.
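A minimal sketch of iteration (15), under the same assumptions as the MLBA sketch above, with the preconditioner applied in the Fourier domain through the BCCB eigenvalues c of C; the sequence alphas (e.g. produced by the AIT rule, or fixed to α_opt) and the frame operators are placeholders:

```python
import numpy as np

def ait_breg(y_delta, blur, c, W, Wadj, mu, alphas, n_iter=50):
    """Sketch of iteration (15): Bregman update preconditioned with
    C^*(CC^* + alpha_n I)^{-1}, where C is BCCB with FFT eigenvalues c.
    `blur` applies T, `alphas[n]` supplies alpha_n; soft_threshold is the
    function S_mu from the MLBA sketch above."""
    z = np.zeros(W(y_delta).shape)
    v = np.zeros_like(z)
    for n in range(n_iter):
        r = y_delta - blur(Wadj(v))                 # r = y^delta - T W^* v_n
        rhat = np.fft.fft2(r)
        # C^*(C C^* + alpha_n I)^{-1} r, computed componentwise in Fourier space
        g = np.real(np.fft.ifft2(np.conj(c) * rhat / (np.abs(c) ** 2 + alphas[n])))
        z = z + W(g)
        v = soft_threshold(z, mu)
    return Wadj(v)
```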

Page 37: A fast nonstationary preconditioning strategy for ill ...

Numerical Results

W is built from linear B-splines.

PSNR = 20 log₁₀( 255·n / ‖x − x̄‖ ), with x̄ the computed approximation of the true n × n image x.

The best regularization parameter is chosen by hand for every method.

Compared methods

MLBA: iteration (13) by Cai, Osher, and Shen.

AIT–Breg: our nonstationary iteration (15).

AIT–Breg–opt: our iteration (15) with a stationary α_n = α_opt chosen by hand as in the MLBA.

FA–MD, TV–MD: ADMM [Almeida, Figueiredo, IEEE 2013] for Frame-based Analysis and Total Variation, respectively.

FTVd: extension of FTVd [Wang et al., SIIMS 2008] to deal with boundary artifacts [Bai et al., 2014].

Page 38: A fast nonstationary preconditioning strategy for ill ...

Example 3 (Saturn)

ν = 1%, Zero BCs.

[Images: true image, PSF, observed image.]

Page 39: A fast nonstationary preconditioning strategy for ill ...

Restorations

Method         PSNR   CPU time
AIT–Breg       31.25    10.32
AIT–Breg–opt   31.49    16.56
MLBA           30.97   200.99
FA–MD          30.87    90.85
TV–MD          31.17    47.61
FTVd           30.50     1.75

[Restored images: MLBA, AIT–Breg, FTVd.]

Page 40: A fast nonstationary preconditioning strategy for ill ...

Example 4 (Boat)

ν = 1%, Antireflective BCs.

[Images: true image, PSF, observed image.]

Page 41: A fast nonstationary preconditioning strategy for ill ...

Restorations

Method         PSNR   CPU time
AIT–Breg       29.77   19.57
AIT–Breg–opt   30.17    3.67
MLBA           29.43   34.26
FA–MD          29.61   15.95
TV–MD          29.87   16.74
FTVd           28.95    0.73

[Restored images: MLBA, AIT–Breg–opt, TV–MD.]

Page 42: A fast nonstationary preconditioning strategy for ill ...

Conclusions

Under the assumption that an approximation C of T is available, our new scheme turns out to be fast and stable.

The choice of ρ reflects how much we trust the previous approximation (a too-small ρ can be detected via α_n or ‖r_n‖).

Our scheme does not require T∗.

Projection into a convex set can be added.

It is possible to include a regularization matrix.

It can be used as the inner least-squares iteration in nonlinear methods.

Page 43: A fast nonstationary preconditioning strategy for ill ...

References

M. Donatelli, M. Hanke, Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring, Inverse Problems, 29 (2013) 095008.

A. Buccini, M. Donatelli, Projected regularized iterated approximated Tikhonov, manuscript, 2015.

Y. Cai, M. Donatelli, D. Bianchi, T. Z. Huang, Regularization preconditioners for frame-based image deblurring with reduced boundary artifacts, manuscript, 2015.