
SIAM J. IMAGING SCIENCES, Vol. 6, No. 3, pp. 1689–1718. © 2013 Society for Industrial and Applied Mathematics

Augmented Lagrangian-Based Sparse Representation Method with Dictionary Updating for Image Deblurring∗

Qiegen Liu†, Dong Liang‡, Ying Song§, Jianhua Luo¶, Yuemin Zhu‖, and Wenshu Li∗∗

Abstract. This paper presents an efficient alternating direction method with patch-based dictionary updating, ADMDU-DEB, for the sparse representation regularization framework of image deblurring. The main idea of the proposed method is to reformulate the variational problem as a linear equality constrained problem and then minimize its augmented Lagrangian function. The alternating direction method decouples the minimization by alternately iterating the pixel-based regularization and the patch-based sparse representation. Notably, the accelerated sparse coding and simple dictionary updating applied in the sparse representation stage enable the whole algorithm to converge in a relatively small number of iterations. Additionally, the approach is readily extended to solve the same kind of variational problem with a nonnegativity constraint. Experimental results on benchmark test images consistently validate the superiority of the proposed approach and demonstrate that it achieves very competitive deblurring performance compared with state-of-the-art deconvolution algorithms.

Key words. image restoration, sparse representation, dictionary updating, augmented Lagrangian, alternating direction method, modified ISTA

AMS subject classifications. 68U10, 62H35, 65K10

DOI. 10.1137/110857349

1. Introduction. In this paper, we propose a robust and fast algorithm for image deblurring problems.

∗Received by the editors November 30, 2011; accepted for publication (in revised form) June 3, 2013; published electronically September 17, 2013. This work was partly supported by the High Technology Research Development Plan (863 Plan) of the People's Republic of China under 2006AA020805, the NSFC of China under 30670574, 61262084, 61250005, 61102043, and 20132BAB211030, Shanghai International Cooperation Grant under 06SR07109, Region Rhône-Alpes of France under the project Mira Recherche 2008, and the joint project of the Chinese NSFC under 30911130364 and the French ANR 2009 under ANR-09-BLAN-0372-01.

http://www.siam.org/journals/siims/6-3/85734.html
†Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China, and Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China, and Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Key Laboratory for MRI, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China ([email protected]).

‡Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Key Laboratory for MRI, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China ([email protected]).

§Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China ([email protected]).

¶College of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China ([email protected]).

‖CREATIS, CNRS UMR 5220, Inserm U 630, INSA Lyon, University of Lyon 1, Lyon, France ([email protected]).

∗∗College of Informatics and Electronics, Zhejiang Sci-Tech University, Hangzhou 310018, China ([email protected]).


In this class of problems, a noisy indirect observation f of an original image u is modeled as

(1)    f = Au + ω,

where A is a general convolution operator and ω is white Gaussian noise with variance std^2. The image denoising problem is a special case of problem (1) in which A is the identity operator.
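As a quick illustration of the degradation model (1), the following minimal Python sketch simulates f = Au + ω with a periodic Gaussian blur and additive white Gaussian noise; the function name and parameter values are ours and purely illustrative, not part of any released code for this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(u, blur_sigma=2.0, noise_std=2.55, seed=0):
    """Simulate f = A u + omega: periodic Gaussian blur plus white Gaussian noise."""
    rng = np.random.default_rng(seed)
    Au = gaussian_filter(u, sigma=blur_sigma, mode="wrap")  # periodic boundary, as assumed later in (21)
    omega = noise_std * rng.standard_normal(u.shape)        # zero-mean noise with variance noise_std**2
    return Au + omega
```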

The image deblurring problem (also referred to as deconvolution or, more generally, restoration) has a long and rich history. To date, several standard and popular deblurring methods have been developed, such as Wiener deconvolution [25], Lucy–Richardson deconvolution [40], and regularized deconvolution [6, 36, 4]. In this paper, we concentrate on the variational approach, particularly sparsity-promoting regularization methods, which restore the image stably while incorporating prior information effectively. In fact, the restoration/recovery problem (1) is often ill-posed, so some prior information about the underlying image needs to be utilized to alleviate the ill-posedness. This leads to the following optimization problem:

(2)    min_u { J(u) + (μ/2)‖Au − f‖_2^2 },

where J(·) represents the penalty functional encoding the given prior information, and μ > 0 is the regularization parameter, which needs to be selected as a tradeoff between the prior information and the observed image data.

Tikhonov-like quadratic regularization leads to a closed-form solution that can be implemented efficiently. However, with the advent of sparse representation theory, sparsity-inducing regularizers (e.g., l0- and l1-based regularization) have increasingly gained popularity (see Table 1). One of the most popular choices is the total variation (TV) prior, which assumes that the underlying images consist of piecewise constant areas. A number of recently studied methods have been developed based on this prior, such as two-step iterative shrinkage/thresholding (TwIST) [6], Nesterov [36], fast ISTA-TV (FISTA-TV) [4], fast-TV [28], and the fast total variation deconvolution (FTVd) algorithm [51]. Because TV may produce a blocky effect, some improved methods have been proposed [10]. For more details about the implementation and software of TV, interested readers can refer to [15]. Wavelet shrinkage is also a popular alternative. A hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm by Neelamani, Choi, and Baraniuk [35] used shrinkage in both the Fourier and wavelet domains to recover images while preserving edges and removing noise. Other methods, such as gradient projection for sparse representation (GPSR) with wavelets [23] and L0 analysis-based sparse optimization (L0-AbS) [39], have also been developed in recent years. Though wavelet methods can efficiently represent classes of signals containing singularities, choosing a proper basis for different images is difficult, and many noise residuals and artifacts appear around edges in images deblurred by wavelet methods. Essentially, natural images contain a rich variety of local structural patterns which cannot be well represented using only one fixed transform or basis. Therefore, TV and wavelet models introduce various defects in the deblurring output.

In the last few years, inspired by the successful application of the nonlocal similarity property of small image patches in image denoising [7, 13, 20], many authors have tried to incorporate more general nonlocal and similarity prior information, also called patchwise (patch-based) sparsity, into inverse problems. In [29, 63], the authors applied nonlocal total variation (NLTV) regularization, which replaces the conventional gradient functional used in TV by a weighted nonlocal gradient functional, to reduce the blocky effect of TV regularization and improve deblurring and recovery results. In [14, 16], the state-of-the-art block-matching and 3D filtering (BM3D) denoising method was extended to the deblurring application. As a two-stage algorithm [14], BM3D deblurring (BM3DDEB) first uses the BM3D modeling to obtain an initial estimate and then uses that estimate for an empirical Wiener filtering. In [16], the authors further proposed the iterative decoupled deblurring BM3D (IDD-BM3D) algorithm, which casts the deblurring problem as the minimization of two objective functions balanced at a Nash equilibrium, leading to a decoupled deblurring algorithm. Numerical experiments showed that the decoupled formulation exploits the BM3D modeling more effectively than the two-stage approach of BM3DDEB. Recently, an adaptive sparse domain selection (ASDS) scheme was introduced by Dong et al. [19]. The ASDS method learns a series of compact subdictionaries and adaptively assigns each local patch a subdictionary as its sparse domain. A key feature of the scheme is that its subdictionaries are learned offline from a training dataset via a K-means clustering algorithm and the principal component analysis (PCA) technique. Apart from selecting online, from these learned subdictionaries, the one best fitted to code a given local image patch, they improved the ASDS scheme by introducing two types of adaptive regularization terms: the autoregressive model-based regularization and the nonlocal similarity regularization.

Under the assumption that each image patch can be sparsely represented [21, 3], the K-singular value decomposition (K-SVD) algorithm was proposed by Elad and Aharon [20] for image denoising. Extensions using learned dictionaries for deblurring have been drawing much attention recently [62, 61, 33, 27, 12, 56]. These algorithms can be grouped into two broad categories. The first utilizes the sparse representation of image patches as a prior to regularize the ill-posed inverse problem. In [62], Zhang and Zhang extended the sparse representation based method to nonblind image deblurring, with an alternating iterative procedure between a conventional Landweber update and sparse representation. In [61], they further developed a blind image deblurring method based on sparse representation. The second class of regularization methods exploits the assumption that pairs of blurry/sharp patches from the blurred and original images share the same sparse representation coefficients under a pair of low-/high-resolution dictionaries [33, 27, 12]. This idea had previously been used for image super-resolution by Yang et al. [56]. Nevertheless, such a strategy has an intrinsic drawback, particularly for blind image deblurring: since the blurring kernel is unknown, the construction of the coupled dictionary is not an easy task. In [27], Hu, Huang, and Yang constructed the blurry-sharp dictionary pair from the blurry image and the image deblurred using the current estimate of the kernel. However, as stated in [61], since the deblurring procedure usually introduces severe artifacts, the dictionary pair constructed via this method may not be desirable for deblurring. It should be emphasized that in all of these methods except that in [12], the sparse representation step is implemented by the K-SVD algorithm [62, 61, 33, 27, 56].

Since the patch-based methodology can capture local image features effectively and recover images more robustly, exploiting patch-based sparsity is an appealing research direction.

Table 1. The commonly used sparsity-inducing regularizers.

Regularization term | J(u) | References
Total variation (TV) | Σ_i ‖∇_i u‖_1 | [6], [36], [4], [28], [51], [10], [15]
Wavelet | Σ_i ‖Ψ_i u‖_1, where Ψ is the wavelet transform | [6], [35], [23], [39]
Nonlocal total variation (NLTV) and its variant BM3D | ∫_Ω |∇_w u(x)| dx = ∫_Ω √(∫_Ω (u(x) − u(y))^2 w(x, y) dy) dx, where w(x, y) = exp{ −G_a(‖u(x + ·) − u(y + ·)‖^2)(0) / (2h^2) } | [29], [63], [14], [16]
Dictionary learning (DL) | J(u) = min_{D,Γ} Σ_l ( ‖α_l‖_1 + (λ/2)‖Dα_l − R_l u‖_2^2 ) | [20], [19], [62], [61]

Two issues in these patch-based, highly nonlinear problems are the computational complexity and the sensitivity to initialization, due to the nonconvex nature of the problem. They are especially severe in dictionary learning related problems [20, 21, 3, 48]. To address these issues, we have developed a class of augmented Lagrangian (AL)/Bregman iteration based dictionary learning methods [32, 31]. Motivated by the predual proximal point algorithm (PPPA) [34], a predual dictionary learning (PDL) method was proposed in [32]. It extends PPPA by updating the dictionary via a gradient descent step after each inner minimization step of PPPA. Theoretical analysis shows that PDL favors dictionary updating and yields better image recovery. However, PDL is restricted to problems with nonnegative variables. In [31], we proposed the augmented Lagrangian multiscale dictionary learning (ALM-DL) method to handle the dictionary learning problem directly. Compared to conventional dictionary learning methods, ALM-DL has two advantages: it is less dependent on the initial dictionary, and it changes the initial dictionary drastically in the first few steps. Numerical studies in [32] and [31] showed the superior performance of these approaches.

In this work, we focus on developing robust and efficient methods with dictionary updating to handle general inverse problems such as deblurring, by extending the ideas in [32, 31]. There are two main contributions. First, we apply the variable-splitting and alternating-minimization methodology to solve problem (2) with patch-based regularization; the resulting method is denoted ADMDU-DEB. In particular, the whole algorithm alternates between updating the patch-based adaptive dictionary and solving the least-squares minimization involving the underlying image. One advantage of this updating strategy is that, as the iterations proceed, the recovered image exhibits more and more fine structures before the stopping criterion is reached. Moreover, an accelerated strategy is applied to the sparse coding step of the inner minimization, which improves the efficiency of the whole algorithm. Second, we further extend the method to solve the patch-based problem (2) with a nonnegativity constraint; the resulting method is denoted NNADMDU-DEB. As is well known, imposing prior information about the solution as a constraint can enhance recovery accuracy. The most common such constraint is nonnegativity, which is physically meaningful [30, 11, 42]. We demonstrate that our extended method can improve the reconstruction quality without adding computational load.

The rest of this paper is organized as follows. Section 2 reviews prior work on the AL scheme and recalls some notation for the Bregman iterative method (or the equivalent AL scheme). In section 3, we propose the efficient dictionary updating strategy within the alternating minimization algorithm ADMDU-DEB for solving the inverse problem (1); the updates of the other variables in the whole alternating minimization algorithm are also discussed. In section 4, the extended version NNADMDU-DEB is developed to address problem (1) with a nonnegativity constraint. Various numerical experiments, with comparisons to other state-of-the-art methods, are conducted in section 5. Section 6 concludes the paper.

2. Preliminaries.

2.1. Review of the AL and Bregman iteration methods. In recent years, one of the most promising optimization techniques for various image recovery problems has been the AL scheme and related methods [41, 1, 2, 55, 49]. Although the AL method is a well-studied optimization algorithm for solving constrained problems in mathematical programming [41], it has enjoyed a resurgence mainly due to the work of Osher and his colleagues. A closely related algorithm is the Bregman iterative method [45, 37, 24]; the two are identical when the constraint of the optimization problem is linear. The relation between these two methods has been comprehensively discussed in [60, 22, 54]. Operator splitting and alternating minimization are two basic ingredients of these methods: they often first employ operator splitting to transform the original unconstrained minimization problem into an equivalent constrained problem, which is handled by the AL scheme, and then the alternating minimization strategy (or alternating direction method (ADM)) is used to iteratively solve the subproblems. Generally speaking, the AL scheme aims to solve the following constrained problem:

(3)    min_{α∈Ω} E(α)   s.t.   I − Dα = 0,

where H(α) = I − Dα. Problem (3) can be solved via the standard AL formulation:

(4)    α^{k+1} = argmin_{α∈Ω} E(α) + ⟨s_β^k, I − Dα⟩ + (β/2)‖I − Dα‖_2^2,
       s_β^{k+1} = s_β^k + β(I − Dα^{k+1}).

By letting s = s_β/β, the iterative scheme (4) becomes an alternative form of the AL algorithm, often called the Bregman iterative method [37], which is reformulated as follows:

(5)    α^{k+1} = argmin_{α∈Ω} E(α) + (β/2)‖I + s^k − Dα‖_2^2,
       s^k + I = s^{k+1} + Dα^{k+1}.

Usually the Bregman iteration method is also called the iterated refinement method. As explained in [60, 8], the iterative procedure (5) has an intriguing multiscale interpretation: at each iteration k it adds the "small scales" s^k back to I and performs the inner minimization with I replaced by I + s^k, which is then decomposed into the "large scale" Dα^{k+1} and the "small scale" s^{k+1}.
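To make this refinement mechanism concrete, the sketch below runs iteration (5) for the special case E(α) = ‖α‖_1 with an orthonormal dictionary D (here a unitary 1D DCT), for which the inner minimization has a closed-form soft-thresholding solution. This toy example is our own illustration of the add-back-the-residual idea, not an algorithm from the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def shrink(g, tau):
    """Soft thresholding: solution of min_x ||x||_1 + (1/(2*tau)) ||x - g||^2."""
    return np.sign(g) * np.maximum(np.abs(g) - tau, 0.0)

def bregman_l1(I, beta=1.0, n_iter=10):
    """Iteration (5) with E(alpha) = ||alpha||_1 and D an orthonormal transform (unitary DCT)."""
    s = np.zeros_like(I)
    D_alpha = np.zeros_like(I)
    for _ in range(n_iter):
        # Inner minimization in closed form: alpha = shrink(D^T (I + s), 1/beta).
        alpha = shrink(dct(I + s, norm="ortho"), 1.0 / beta)
        D_alpha = idct(alpha, norm="ortho")   # "large scale" part
        s = s + I - D_alpha                   # "small scale" residual, added back at the next iteration
    return D_alpha, s
```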

2.2. Literature review of the AL scheme applied to image deblurring. Recently, the ADM has been successfully applied to many optimization problems arising from different applications. Based on the variable splitting technique for the TV regularizer shown in (6), the ADM has been applied to image deblurring [50] (denoted ADM-TV), magnetic resonance image reconstruction from partial frequencies [59], compressed sensing [57], and other image recovery problems.

(6)    min_{u,v} { Σ_i ‖v_i‖_1 + (μ/2)‖Au − f‖_2^2 }   s.t.   v_i = ∇_i u.

ADM-TV is faster than FTVd [51] in that the AL-based alternating minimization is far better than the penalty method with continuation implemented in FTVd; it is released as version 4.0 of the FTVd package (FTVd 4.0).¹ Similar ideas can also be found in the literature [24, 9]. Specifically, Chan et al. [9] proposed a faster and more robust method whose main difference from ADM-TV lies in the update scheme of its AL parameter. As discussed in the introduction, a natural question is whether the AL technique and fast iterative schemes can be applied to the minimization problem (2) when the regularization term is J_NLTV(u) (see Table 1). Unfortunately this fails because of the nonlinear nature of the NLTV regularizer. A recent method, BOS-NLTV, developed by Zhang et al. [63], used an alleviating strategy: it applies the operator splitting (OS) technique to decouple the minimization problem into a two-step alternating strategy, an NLTV denoising step followed by a gradient descent step, as follows:

(7)    u^{1,k} = u^k − δA^T(Au^k − f),
(8)    u^{k+1} = argmin_u { J_NLTV(u) + (μ/(2δ))‖u − u^{1,k}‖_2^2 },

where 0 < δ < 2/‖A^T A‖. Though some efficient numerical results are obtained, this approach does not benefit from the separation of the NLTV regularizer and the fast Fourier transform (FFT) in the way that ADM does when applied to TV.

Another successful application of ADM to imaging inverse problems is the split augmented Lagrangian shrinkage algorithm (SALSA) presented by Afonso, Bioucas-Dias, and Figueiredo [1]. In contrast to ADM-TV, which addresses the difficulty raised by a nonseparable and nonquadratic regularizer like Σ_i ‖∇_i u‖_1, they exploit the second-order (Hessian) information of the data-fidelity term and define the following constrained optimization formulation:

(9)    min_{u,v} { J(u) + (μ/2)‖Av − f‖_2^2 }   s.t.   v = u,

where J(u) can be Σ_i ‖∇_i u‖_1 or Σ_i ‖Ψ_i u‖_1 as shown in Table 1. They observed that the subproblem associated with the variable v in the AL scheme is a strictly convex quadratic functional, so the minimization with respect to v can be solved exactly and quickly; thus the efficiency of the whole algorithm is fully exploited. Notice that when the deblurring problem is restricted to the TV regularizer, their computational time exceeds that of ADM-TV, since the computation of TV in ADM-TV benefits from the FFT.

¹http://www.caam.rice.edu/~optimization/L1/ftvd/v4.0/

3. Proposed algorithm. In this section, we adopt the patch-based sparse representation regularizer for image deblurring. In particular, unlike the splitting technique applied to the TV penalty term in ADM-TV and to the data-fidelity term in SALSA, the key innovation in our method is that the splitting technique is applied to the sparse representation regularization term, which follows and extends the spirit of our previous methods PDL and ALM-DL for image denoising [32, 31].

3.1. The whole scheme of the proposed algorithm. Employing the patch-based sparse representation as the sparsity-promoting regularizer, we obtain the following variational problem:

(10)    min_u { min_{D,Γ} Σ_l ‖α_l‖_1 + (λ/2)‖Dα_l − R_l u‖_2^2 } + (μ/2)‖Au − f‖_2^2,

where R_l u denotes the vectorized form of the lth patch extracted from the image u, D is the unknown dictionary, and Γ = [α_1, α_2, . . . , α_L] is the matrix of sparse coding coefficients.
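In (10), R_l is the linear operator that extracts the lth (vectorized) √M × √M patch; its adjoint R_l^T places a patch back into the image, and Σ_l R_l^T R_l is a diagonal matrix counting how often each pixel is covered (ωI_N in the fully overlapping, periodic setting used later). The following NumPy sketch of these two operators uses our own helper names and a simplified, non-periodic patch grid; it is only meant to fix ideas.

```python
import numpy as np

def extract_patches(u, m=8, r=1):
    """All R_l u: vectorized m x m patches taken with stride r (no wrap-around at the border)."""
    H, W = u.shape
    cols = [u[i:i + m, j:j + m].ravel()
            for i in range(0, H - m + 1, r)
            for j in range(0, W - m + 1, r)]
    return np.stack(cols, axis=1)                 # shape (m*m, L): one column per patch

def aggregate_patches(P, shape, m=8, r=1):
    """Sum_l R_l^T p_l together with the per-pixel cover count (diagonal of Sum_l R_l^T R_l)."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    l = 0
    for i in range(0, shape[0] - m + 1, r):
        for j in range(0, shape[1] - m + 1, r):
            acc[i:i + m, j:j + m] += P[:, l].reshape(m, m)
            cnt[i:i + m, j:j + m] += 1.0
            l += 1
    return acc, cnt
```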

As in [50], the primary innovation of the strategy to address problem (1) is to apply the AL technique to tackle the unconstrained problem in (10). For convenience of exposition, we first assume the dictionary D to be fixed temporarily, so that the minimization of (10) reduces to

(11)    min_u { min_Γ Σ_l ‖α_l‖_1 + (λ/2)‖Dα_l − R_l u‖_2^2 } + (μ/2)‖Au − f‖_2^2.

We convert this unconstrained problem to an equivalent constrained optimization problem using the variable splitting technique, where the auxiliary variable z_l takes the place of a linear transformation of the base variables (involving the components u and α_l), i.e.,

(12)    min_u { min_Γ Σ_l ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 } + (μ/2)‖Au − f‖_2^2
        s.t.   R_l u = Dα_l + z_l,   l = 1, 2, . . . , L.

At this point, we are in a position to clearly explain the difference between this formulation and the splitting exploited in ADM-TV and SALSA for image recovery. First, ADM-TV devotes itself to the nonseparability of the regularization term, and SALSA addresses the nonseparability of the data-fidelity term; both splittings are made in the pixel domain, whereas the variable splitting used in our formulation is conducted in the image-patch domain. Second, we not only pursue the sparse coding of the variables but also update the dictionary. Although no global solution can be guaranteed due to the nonconvexity of the problem, we point out that, by making use of variable splitting and the corresponding AL scheme, the dictionary updating proposed in this work possesses an excellent iterative property.

We construct a corresponding AL function and iterate it as follows:

(13)    (α_l^{k+1}, z_l^{k+1}, u^{k+1}) = argmin_{α_l, z_l, u} L′_β(Γ, Z, u),   l = 1, 2, . . . , L,
(14)    y_l^{k+1} = y_l^k + β(−Dα_l^{k+1} − z_l^{k+1} + R_l u^{k+1}),   l = 1, 2, . . . , L,

where L′_β(Γ, Z, u) = Σ_{l=1}^L L_β(α_l, z_l, u, y_l^k) ≜ Σ_{l=1}^L ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 + ⟨−y_l^k, Dα_l + z_l − R_l u⟩ + (β/2)‖Dα_l + z_l − R_l u‖_2^2 + (μ/2)‖Au − f‖_2^2, the y_l^k, l = 1, 2, . . . , L, are Lagrangian multipliers, β > 0 is a penalty parameter, and Z = [z_1, z_2, . . . , z_L]. In what follows, the term ⟨−y_l^k, Dα_l + z_l − R_l u⟩ + (β/2)‖Dα_l + z_l − R_l u‖_2^2 is usually replaced by (β/2)‖Dα_l + z_l − R_l u − y_l^k/β‖_2^2 (equal up to a constant) for convenience.

The alternating direction method (ADM) is applied to decouple the minimization and simplify the optimization problem, resulting in the following iterative scheme:

(15)    (α_l^{k+1}, z_l^{k+1}) = argmin_{α_l, z_l} ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 + (β/2)‖Dα_l + z_l − R_l u^k − y_l^k/β‖_2^2,   l = 1, 2, . . . , L,

(16)    u^{k+1} = argmin_u { Σ_l (β/2)‖D^{k+1}α_l^{k+1} + z_l^{k+1} − R_l u − y_l^{k+1}/β‖_2^2 + (μ/2)‖Au − f‖_2^2 },

(17)    y_l^{k+1} = y_l^k + β(−Dα_l^{k+1} − z_l^{k+1} + R_l u^{k+1}),   l = 1, 2, . . . , L.

After applying the variable splitting and alternating minimization methodology to problem (11), the resulting subproblems are addressed in subsections 3.2 and 3.3, respectively, by taking advantage of their particular structure.

Now we consider the updating of the dictionary D. In the conventional dictionary learning approach, the dictionary is updated after the minimization of (11) has been solved to optimality, and the whole learning procedure loops in an alternating way until some stopping condition is satisfied [21, 3]. In contrast, here we update the dictionary after each inner iteration of (13)–(14), similarly as in PDL and ALM-DL [32, 31]. The intuition is that if we extend the penalty functional L′_β(Γ, Z, u) to L′_β(Γ, Z, u, D) by introducing the new variable D, then updating the dictionary D will substantially decrease the objective value. Taking the derivative of the extended functional L′_β(Γ, Z, u, D) with respect to D, we obtain the following gradient descent update rule:

(18)    D^{k+1} = D^k − ζ ∂L′_β(Γ^{k+1}, Z^{k+1}, u^{k+1}, D)/∂D |_{D=D^k}
               = D^k − ζ[−Y^k + β(D^k Γ^{k+1} + Z^{k+1} − Ru^{k+1})](Γ^{k+1})^T
               = D^k + ζY^{k+1}(Γ^{k+1})^T.

After the gradient descent update, the columns of the dictionary (denoted d_q, q = 1, 2, . . . , Q) are additionally constrained to have unit norm in order to avoid the scaling ambiguity [21, 3]. One property worth noting is that each dictionary update can be considered as a refinement operation. For more information, please refer to [32, 31].
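As a compact illustration, the update rule (18) followed by the unit-norm column constraint can be written in a few lines of NumPy, with D of size M × Q, Y of size M × L, and Γ stored as a Q × L matrix; the helper name below is ours.

```python
import numpy as np

def update_dictionary(D, Y, Gamma, zeta=0.01, eps=1e-12):
    """One dictionary update step: D <- D + zeta * Y Gamma^T (rule (18)),
    then rescale every atom (column) to unit l2 norm."""
    D = D + zeta * (Y @ Gamma.T)
    norms = np.linalg.norm(D, axis=0)
    return D / np.maximum(norms, eps)   # guard against an all-zero (unused) atom
```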

To sum up, solving problem (11) combined with the dictionary updating scheme (18) tackles problem (10). A general description of the resulting algorithm, ADMDU-DEB, is given in Algorithm 1; its implementation and convergence are discussed in subsection 3.4.

Algorithm 1 The general description of ADMDU-DEB.
1: Initialization: Γ^0 = 0; C^0 = 0; D^0; u^0; μ; β; λ; ζ
2: while stopping criterion not satisfied (loop in k) do
3:   // Solve the subproblem with image patches: Z and Γ
4:   {Z^{k+1}, Γ^{k+1}} = argmin_{Z,Γ} ( Σ_l ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 + (β/2)‖Dα_l + z_l − R_l u^k − y_l^k/β‖_2^2 )
5:   // Solve for the target solution: u
6:   u^{k+1} = argmin_u { Σ_l (β/2)‖Dα_l^{k+1} + z_l^{k+1} − R_l u − y_l^{k+1}/β‖_2^2 + (μ/2)‖Au − f‖_2^2 }
7:   Y^{k+1} = Y^k + β(−DΓ^{k+1} − Z^{k+1} + Ru^{k+1})
8:   D^{k+1} = D^k + ζY^{k+1}(Γ^{k+1})^T;  d_q^{k+1} = d_q^{k+1}/‖d_q^{k+1}‖_2, q = 1, 2, . . . , Q
9: end while

3.2. The u-subproblem. For the subproblem with respect to the variable u, the minimizer is updated at each iteration of the AL iterative process as follows:

(19)    u^{k+1} = argmin_u { Σ_l (β/2)‖D^{k+1}α_l^{k+1} + z_l^{k+1} − R_l u − y_l^{k+1}/β‖_2^2 + (μ/2)‖Au − f‖_2^2 }.

The least-squares solution satisfies the normal equation

(20)    (μA^T A + β Σ_l R_l^T R_l) u^{k+1} = μA^T f + β Σ_l R_l^T (D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β).

Under the periodic boundary condition for u, both Σ_l R_l^T R_l (in fact Σ_l R_l^T R_l = ωI_N) and A^T A are block circulant. Therefore, the Hessian matrix on the left-hand side of (20) can be diagonalized by the two-dimensional discrete Fourier transform F. Using the convolution theorem of Fourier transforms, we can write

(21)    u^{k+1} = F^{-1}( [μF(A)^* ∘ F(f) + βF(Σ_l R_l^T (D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β))] / [μF(A)^* ∘ F(A) + βF(Σ_l R_l^T R_l)F^T] ),

where ^* denotes complex conjugation, ∘ denotes componentwise multiplication, and the division is componentwise as well.
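Since A is a periodic convolution and Σ_l R_l^T R_l = ωI_N, the quotient in (21) is a simple pointwise division in the Fourier domain, so the u-update costs essentially two FFTs. A minimal sketch of this step is given below; it assumes the blur is represented by a point-spread function psf of the same size as the image (suitably centered) and that rhs_patches already holds Σ_l R_l^T(D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β); the function name is ours.

```python
import numpy as np

def solve_u(psf, f, rhs_patches, mu, beta, omega):
    """FFT solution of (mu A^T A + beta*omega I) u = mu A^T f + beta * rhs_patches,
    i.e., formula (21) for a periodic convolution A."""
    FA = np.fft.fft2(psf)                                   # eigenvalues of the circulant blur operator
    num = mu * np.conj(FA) * np.fft.fft2(f) + beta * np.fft.fft2(rhs_patches)
    den = mu * np.conj(FA) * FA + beta * omega              # Sum_l R_l^T R_l = omega * I in the Fourier domain
    return np.real(np.fft.ifft2(num / den))
```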

3.3. Modified ISTA algorithm for sparse coding. In the proposed sparse representation framework (15)–(17), the speed and accuracy of the proposed method clearly depend largely on how subproblem (15) is solved over the variables α and z. A natural approach is to use the alternating split Bregman method [24] (also called the ADM [1, 57]), in which (15) is solved by alternating minimization with respect to α and z while keeping the other variable fixed. For the variable α in particular, there is an extensive literature in optimization and numerical analysis. To date, the most widely studied first-order method for solving (15) with respect to α is the iterative shrinkage/thresholding algorithm (ISTA) [9, 60], which iteratively performs a gradient descent update followed by soft thresholding. The advantage of the popular ISTA is its simplicity. However, ISTA is also recognized as a slow method, and many alternatives that speed it up have been proposed, such as sparse reconstruction by separable approximation (SpaRSA) [53] and the fast iterative shrinkage/thresholding algorithm (FISTA) [5].

Here we use a modified ISTA strategy that takes advantage of the particular structure of the resulting problem. The whole iterative procedure reduces to a very simple and compact form owing to the use of the up-to-date y^m, where the index m denotes the inner ISTA iteration with respect to α. First, the minimization of (15) with respect to z can be computed analytically, so that z can be eliminated:

(22)    min_z { ‖α‖_1 + (λ/2)‖z‖_2^2 + (β/2)‖z − (−Dα + Ru + y^k/β)‖_2^2 }
        = ‖α‖_1 + min_z { (λ/2)‖z‖_2^2 + (β/2)‖z − (−Dα + Ru + y^k/β)‖_2^2 }.

From the second and third terms of (22) we obtain z = (β/(λ+β))(−Dα + Ru + y^k/β). Moreover, substituting this expression for z into (15) and (17) yields

(23)    α^{k+1} = argmin_α L_β(α) ≜ ‖α‖_1 + (λβ/(2(λ+β)))‖Dα − Ru − y^k/β‖_2^2,
(24)    y^{k+1} = y^k + β(−Dα^{k+1} − z^{k+1} + Ru) = (βλ/(λ+β))(−Dα^{k+1} + Ru + y^k/β).

Second, we employ ISTA to update the variable α, and at each inner iteration we use the intermediate variable y^m to write the updating scheme compactly. The ISTA applied to the minimization over α is rearranged as follows:

(25)    α^{m+1} = argmin_α { ‖−Dα + Ru + y^k/β‖_2^2 + (2(λ+β)/(λβ))‖α‖_1 }.

Let H(α) = ‖−Dα + Ru + y^k/β‖_2^2, so that ∇H(α) = 2D^T(Dα − Ru − y^k/β). Following the standard ISTA algorithm [17, 60] then yields

(26)    α^{m+1} = argmin_α { γ‖α − [α^m + D^T(Ru + y^k/β − Dα^m)/γ]‖_2^2 + (2(λ+β)/(λβ))‖α‖_1 }
              = Shrink( α^m + ((λ+β)/(λγβ))D^T y^m, (λ+β)/(λγβ) ),

where γ ≥ eig(D^T D) and

x = Shrink(g, μ) = { g − μ,  g ≥ μ;   0,  −μ ≤ g < μ;   g + μ,  g < −μ }

is the solution of x = argmin_x { ‖x‖_1 + (1/(2μ))‖x − g‖^2 }.
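The resulting inner loop (lines 4 and 5 of Algorithm 2 below) therefore alternates a multiplier update with the shrinkage step (26). A minimal matrix-form NumPy sketch of one inner sweep over all patches is shown here; D is M × Q, Gamma is Q × L, Ru stacks the patches R_1 u, . . . , R_L u as an M × L matrix, and C stores the multipliers (the variable names are ours).

```python
import numpy as np

def shrink(g, tau):
    """Soft thresholding: the closed-form solution of min_x ||x||_1 + (1/(2*tau)) ||x - g||^2."""
    return np.sign(g) * np.maximum(np.abs(g) - tau, 0.0)

def sparse_coding_sweep(D, Gamma, Ru, C, lam, beta, gamma):
    """One inner iteration of the modified ISTA:
    Y     <- lam*beta/(lam+beta) * (-D Gamma + Ru + C/beta)
    Gamma <- Shrink(Gamma + (lam+beta)/(lam*gamma*beta) * D^T Y, (lam+beta)/(lam*gamma*beta))."""
    Y = (lam * beta / (lam + beta)) * (-D @ Gamma + Ru + C / beta)
    step = (lam + beta) / (lam * gamma * beta)
    Gamma = shrink(Gamma + step * (D.T @ Y), step)
    return Gamma, Y
```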

As can be seen, the variable z is omitted and updated implicitly in the iterative scheme (i.e., Z^{m+1} = (β/(λ+β))(−D^k Γ^{k,m} + Ru + C^k/β)) to save computational time. Meanwhile, the variable y is also accelerated in addition to the variable α: it is updated more often than in the standard AL algorithm, where y is updated only once per outer iteration. Thus the whole iterative procedure reduces to a very simple and compact form.

3.4. Algorithm convergence and its implementation. All in all, the ADMDU-DEB algorithm summarized in Algorithm 1 can be viewed as a modified block-coordinate minimization algorithm. As for its convergence, in the case of a fixed dictionary the proposed method falls into the category of the classical AL scheme (solved by inexact ADM). In problem (11), both θ_1(u, Γ) = Σ_l ‖α_l‖_1 + (μ/2)‖Au − f‖_2^2 and θ_2(u, Γ) = (λ/2)Σ_l ‖Dα_l − R_l u‖_2^2 are convex functionals, and both D and R_l are continuous linear operators. As is well known and demonstrated by numerical evidence in much of the literature [24, 60, 50], the convergence of the algorithm is then guaranteed, and inexact minimizations can still be effective. However, in the case of a gradually updated dictionary, because of the nonconvexity and nonlinearity of the problem the convergence analysis is challenging, and the pursuit of a global solution remains an open issue.

In an earlier paper [47], the AL scheme and ADM were applied to solve the matrix separation problem based on low-rank factorization. For that nonconvex problem the authors provided a weak convergence result for their algorithm; i.e., under mild conditions any limit point of the iteration sequence generated by the algorithm is a Karush–Kuhn–Tucker (KKT) point. In our work, for this new nonconvex problem we give a similar result regarding the convergence of Algorithm 1. It should be emphasized that although the following convergence result is far from satisfactory, it provides some assurance about the behavior of the algorithm.

The subgradients of L′_β(D, Γ, Z, u) are as follows:

(27)    ∂_{z_l} L′_β(D, Γ, Z, u) = ∂_{z_l} { (λ/2)‖z_l‖_2^2 + (β/2)‖Dα_l + z_l − R_l u − y_l/β‖_2^2 }
                                = λz_l + β(Dα_l + z_l − R_l u − y_l/β) = 0,

(28)    ∂_D L′_β(D, Γ, Z, u) = ∂_D { Σ_l (β/2)‖Dα_l + z_l − R_l u − y_l/β‖_2^2 }
                             = β Σ_l (Dα_l + z_l − R_l u − y_l/β)α_l^T = 0,

(29)    ∂_{α_l} L′_β(D, Γ, Z, u) = ∂_{α_l} { ‖α_l‖_1 + (β/2)‖Dα_l + z_l − R_l u − y_l/β‖_2^2 }
                                = ∂(‖α_l‖_1) + βD^T(Dα_l + z_l − R_l u − y_l/β) = 0,

(30)    ∂_u L′_β(D, Γ, Z, u) = ∂_u { Σ_{l=1}^L (β/2)‖Dα_l + z_l − R_l u − y_l/β‖_2^2 + (μ/2)‖Au − f‖_2^2 }
                             = −β Σ_{l=1}^L R_l^T(Dα_l + z_l − R_l u − y_l/β) + μA^T(Au − f) = 0.

Additionally, by considering the constraint R_l u = Dα_l + z_l, it is straightforward to obtain the KKT conditions for (12) as follows:

(31)    λz_l − y_l = 0;   YΓ^T = 0;   ∂(‖α_l‖_1) − D^T y_l = 0;
        Σ_{l=1}^L R_l^T y_l + μA^T(Au − f) = 0;   R_l u = Dα_l + z_l.

Proposition 1. Let X ≜ (D, Γ, Z, u), and let {X^k}_{k=1}^∞ be generated by our algorithm ADMDU-DEB; assume that {X^k}_{k=1}^∞ is bounded and lim_{k→∞}(X^{k+1} − X^k) = 0. Then any accumulation point of {X^k}_{k=1}^∞ satisfies the KKT conditions (31). In particular, whenever {X^k}_{k=1}^∞ converges, it converges to a KKT point of (12).

Proof. First, since z_l^{k+1} = (β/(λ+β))(−D^k α_l^k + R_l u + y_l^k/β), it follows that

(32-1)    z_l^{k+1} − z_l^k = (β/(λ+β))(−D^k α_l^k + R_l u + y_l^k/β) − z_l^k
                           = [β(−D^k α_l^k + R_l u − z_l^k) + (y_l^k − λz_l^k)] / (λ + β).

Second,

(32-2)    D^{k+1} − D^k = ζY^{k+1}(Γ^{k+1})^T.

Third, since (21) holds, we obtain

(32-3)    Fu^{k+1} − Fu^k
          = [μF(A)^* ∘ F(f) + βF(Σ_l R_l^T(D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β))] / [μF(A)^* ∘ F(A) + βF(Σ_l R_l^T R_l)F^T] − Fu^k
          = {μF(A)^* ∘ F(f) + βF(Σ_l R_l^T(D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β)) − Fu^k(μF(A)^* ∘ F(A) + βF(Σ_l R_l^T R_l)F^T)} / {μF(A)^* ∘ F(A) + βF(Σ_l R_l^T R_l)F^T}
          = {−μF[A^T(Au − f)] − F(Σ_l R_l^T y_l^{k+1}) + βF(Σ_l R_l^T(D^{k+1}α_l^{k+1} + z_l^{k+1} − R_l u^k))} / {μF(A)^* ∘ F(A) + βF(Σ_l R_l^T R_l)F^T}.

Fourth, it follows that

(32-4)    α_l^{m+1} − α_l^m = Shrink( α_l^m + ((λ+β)/(λγβ))D^T y_l^m, (λ+β)/(λγβ) ) − α_l^m
          = { α_l^m + ((λ+β)/(λγβ))D^T y_l^m − (λ+β)/(λγβ) − α_l^m,   if α_l^m + ((λ+β)/(λγβ))D^T y_l^m ≥ (λ+β)/(λγβ);
              0 − α_l^m,                                              if −(λ+β)/(λγβ) ≤ α_l^m + ((λ+β)/(λγβ))D^T y_l^m < (λ+β)/(λγβ);
              α_l^m + ((λ+β)/(λγβ))D^T y_l^m + (λ+β)/(λγβ) − α_l^m,   if α_l^m + ((λ+β)/(λγβ))D^T y_l^m < −(λ+β)/(λγβ) }
          = ((λ+β)/(λγβ)) · { D^T y_l^m − 1,  α_l^{m+1} ≥ 0;   0,  α_l^{m+1} = 0;   D^T y_l^m + 1,  α_l^{m+1} < 0 }
          = ((λ+β)/(λγβ)) · [D^T y_l^m − sign(α_l^{m+1})].

It is worth noting that the derivation of the third line in (32-4) follows from the fact that, in the definition of the shrinkage operator

x = Shrink(g, μ) = { g − μ,  g ≥ μ;   0,  −μ ≤ g < μ;   g + μ,  g < −μ },

the interval satisfying g ≥ μ is equal to that satisfying x ≥ 0, and the interval satisfying g < −μ is equal to that satisfying x < 0.

Finally, it follows that

(32-5)    y_l^{k+1} − y_l^k = β(−Dα_l^{k+1} − z_l^{k+1} + R_l u^{k+1}),   l = 1, 2, . . . , L.

Hence lim_{k→∞}(X^{k+1} − X^k) = 0 implies that both sides of (32-1)–(32-5) tend to zero as k goes to infinity. Consequently,

(33)    R_l u^k − (D^k α_l^k + z_l^k) → 0,   λz_l^k − y_l^k → 0,   Y^{k+1}(Γ^{k+1})^T → 0,
        D^T y_l^m − ∂(‖α_l^{m+1}‖_1) = D^T y_l^m − sign(α_l^{m+1}) → 0,
        Σ_{l=1}^L R_l^T y_l^k + μA^T(Au^k − f) → 0,

where the first limit in (33) is used to derive the others. That is, the sequence {X^k}_{k=1}^∞ asymptotically satisfies the KKT conditions (31), from which the conclusions of Proposition 1 follow readily. This completes the proof.

Employing the simple iterative solvers of subsections 3.2 and 3.3, we obtain the detailed description of the ADMDU-DEB algorithm (Algorithm 2). The method is initialized with the simple estimate u^0 = A^T f. As the iteration proceeds, the recovered image u^k exhibits more and more fine structures before reaching the stopping criterion. It is worth noting that, as analyzed in [31], the sparse coding could be accelerated by some more advanced method such as FISTA, though we found that the improvement is very limited while the computational performance suffers. Sparse representation with dictionary learning is a large-scale and highly nonconvex problem; since the number of training samples is very large, a simple iteration formula is essential. Hence, we use the iterative scheme (26) because of its simple and compact form.

In terms of computational cost, our method ADMDU-DEB is analogous to ADM-TV; i.e., solving for the underlying image u^{k+1} involves two FFTs. The only difference is that ADM-TV updates the auxiliary variable v_i by shrinkage in the difference (gradient) transform, while our method updates the primal variable Γ^{k,m+1} and the dual variable Y^{m+1} in the patch-based transform (see lines 4 and 5 of Algorithm 2). We expect the computation time of the patch-based sparse representation to decrease substantially with a graphics processing unit (GPU) implementation.

The proposed method involves four parameters. The parameter μ is the regularization parameter, which is hand-tuned for the best improvement in signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR). β is the Bregman (or AL) positive parameter, as analyzed in the iterated refinement property of IRM-TV [37].

Algorithm 2 The detailed description of ADMDU-DEB.
1: Initialization: Γ^0 = 0; C^0 = 0; D^0; u^0; μ; β; λ; ζ
2: while stopping criterion not satisfied (loop in k) do
3:   while stopping criterion not satisfied (loop in m) do
4:     Y^{m+1} = (λβ/(λ+β))(−D^k Γ^{k,m} + Ru^k + C^k/β)
5:     Γ^{k,m+1} = Shrink(Γ^{k,m} + ((λ+β)/(λγβ))(D^k)^T Y^{m+1}, (λ+β)/(λγβ))
6:   end while
7:   C^{k+1} = Y^{m+1};  Γ^{k+1,0} = Γ^{k,m+1};  D^{k+1} = D^k + ζC^{k+1}(Γ^{k+1,0})^T;
     d_q^{k+1} = d_q^{k+1}/‖d_q^{k+1}‖_2, q = 1, 2, . . . , Q
8:   Obtain u^{k+1} using (21)
9: end while
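Putting the pieces together, the outer loop of Algorithm 2 can be organized roughly as in the following skeleton, where the three callables stand for the inner sparse-coding sweep (lines 3–6), the multiplier transfer plus dictionary update (line 7), and the FFT-based u-update (21) of line 8; they are assumed to close over the shared variables (D, Γ, C, etc.). This is our own pseudocode-level sketch, not released code.

```python
import numpy as np

def admdu_deb_outer(u0, inner_sweep, update_dictionary, solve_u,
                    n_inner=2, eps=1e-3, max_outer=100):
    """Skeleton of the outer loop of Algorithm 2 with the relative-change stopping rule of subsection 3.4."""
    u = u0
    for _ in range(max_outer):
        for _ in range(n_inner):      # m-loop: accelerated sparse coding (lines 3-6)
            inner_sweep()
        update_dictionary()           # line 7: transfer C, Gamma, then dictionary update (18) + normalization
        u_new = solve_u()             # line 8: FFT solution (21)
        if np.linalg.norm(u_new - u) / np.linalg.norm(u_new) <= eps:
            return u_new
        u = u_new
    return u
```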

It has been shown that the choice of the Bregman parameter has little effect on the final reconstruction quality as long as it is sufficiently large; the larger the Bregman parameter, the more iterations the Bregman method needs to reach the stopping condition. λ reflects the sparsity level of the image patches and can be determined empirically. Finally, the stepsize ζ of the dictionary updating stage is set to a small positive number such as ζ = 0.01. In the practical implementation, the outer iteration k runs until ‖u^{k+1} − u^k‖_2/‖u^{k+1}‖_2 ≤ ε is satisfied; 2–5 inner iterations in m suffice to obtain satisfactory results, and the choice of m is discussed in section 5.

4. Extension to a nonnegative variable. It is important to highlight that prior information constraining the solution values is very useful in image deblurring. Image values that represent physical quantities such as photon counts or energies are often nonnegative. The constraint u ≥ 0 is physically meaningful in most cases (images acquired by digital cameras, MRI, CT, etc.), and its enforcement has attracted some attention [30, 11, 18, 44, 26]. For example, in applications such as biomedical imaging [38, 18], gamma ray spectral analysis [44], and astronomical imaging and spectroscopy [26], the physical characteristics of the problem require the recovered data to be nonnegative, and the corresponding authors stressed the importance of nonnegativity in their models.

An intuitive approach to ensuring nonnegativity is to solve the unconstrained problem first and then set the negative components of the resulting output to zero. However, this approach may produce spurious ripples in the reconstructed image, and chopping off the negative values may also introduce patches of black that are visually unpleasant. To overcome this drawback, several efficient approaches have been proposed in the past few years. The nonnegatively constrained Chan–Golub–Mulet algorithm (NNCGM) [30] is a variant of the CGM method that handles the nonnegativity constraint. TV regularization with bound constraints (TV-BC) [11] uses a splitting technique to decouple the TV minimization from the enforcement of the constraints, since the nonnegativity constraint improves the quality of the reconstructions. More recently [42], an algorithm called iteratively reweighted norm-nonnegative quadratic programming (IRN-NQP) was proposed; it owes its name to the derivation of the method: it starts by representing the ℓp and ℓq norms in its minimization problem by equivalent weighted ℓ2 norms, in the same fashion as the iteratively reweighted norm (IRN) algorithm (see [43]), and then casts the resulting weighted L2 functional as a nonnegative quadratic programming problem (NQP; see [46]). Though the multiplicative updating mechanism makes the algorithm faster than NNCGM and TV-BC, the reconstruction quality is almost the same.

In this section, by incorporating prior information on the solution values, we extend problem (11) as follows:

(34)    min_u ( min_Γ Σ_l ‖α_l‖_1 + (λ/2)‖Dα_l − R_l u‖_2^2 ) + (μ/2)‖Au − f‖_2^2
        s.t.   u ∈ [a, b],

where u ∈ [a, b] is meant pointwise (and similarly henceforth), and the interval may be unbounded. The upper bound constraint may or may not be enforced and can be useful when a priori information about the upper bound is available. In particular, with a = 0 and b = +∞, we have a nonnegativity constraint.

As in section 3, we first convert the problem into an equivalent constrained optimization problem by variable splitting, where the auxiliary variable z_l takes the place of a linear transformation of the base variables (involving the components u and α_l) and v takes the place of u in the value constraint; hence problem (34) is equivalently transformed to

(35)    Σ_l ( ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 ) + (μ/2)‖Au − f‖_2^2 + ι_{(a≤v≤b)}
        s.t.   R_l u = Dα_l + z_l;   u = v,

where ι_{(a≤v≤b)} denotes the indicator function of the constraint a ≤ v ≤ b.

Then we construct the corresponding AL function and minimize it alternately with respect to one variable at a time, which decouples the minimization and simplifies the optimization:

(36)    (α_l^{k+1}, z_l^{k+1}, v^{k+1}, u^{k+1}) = argmin_{α_l, z_l, v, u} L′_β(Γ, Z, v, u),   l = 1, 2, . . . , L,
(37)    y_l^{k+1} = y_l^k + β(−Dα_l^{k+1} − z_l^{k+1} + R_l u^{k+1}),   l = 1, 2, . . . , L,
(38)    w^{k+1} = w^k + (u^{k+1} − v^k),

where L′_β(Γ, Z, v, u) = Σ_{l=1}^L L_β(α_l, z_l, y_l^k, u, v, w^k) ≜ Σ_{l=1}^L ‖α_l‖_1 + (λ/2)‖z_l‖_2^2 + (β/2)‖Dα_l + z_l − R_l u − y_l^k/β‖_2^2 + (μ/2)‖Au − f‖_2^2 + ι_{(a≤v≤b)} + (β_2/2)‖w^k + u − v‖_2^2.

The alternating minimization is applied to the AL function, resulting in the following iterative scheme:

(39)    (α^{k+1}, z^{k+1}) = argmin_{α,z} ‖α‖_1 + (λ/2)‖z‖_2^2 + (β/2)‖Dα + z − R_l u^k − y^k/β‖_2^2,

(40)    u^{k+1} = argmin_u (μ/2)‖Au − f‖_2^2 + (β/2)‖Dα^{k+1} + z^{k+1} − R_l u − y^k/β‖_2^2 + (β_2/2)‖w^k + u − v^k‖_2^2,

(41)    v^{k+1} = argmin_v ι_{(a≤v≤b)} + (β_2/2)‖w^k + u^{k+1} − v‖_2^2.

In this case, the updating of the image patches is the same as in section 3, while the update of the image solution u^{k+1} is modified by the additional quadratic penalty coupling u and the auxiliary variable v. In other words, the update of u^{k+1} is implemented as follows:

(42)    u^{k+1} = F^{-1}( [μF(A)^* ∘ F(f) + β_2 F(v^k − w^k) + βF(Σ_l R_l^T(D^{k+1}α_l^{k+1} + z_l^{k+1} − y_l^{k+1}/β))] / [μF(A)^* ∘ F(A) + β_2 + βF(Σ_l R_l^T R_l)F^T] ).

Meanwhile, the updating of vk+1 is implemented as follows:

(43)    v^{k+1} = b if w^k + u^{k+1} ≥ b;   w^k + u^{k+1} if a ≤ w^k + u^{k+1} ≤ b;   a otherwise.
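Equation (43) is simply the projection of w^k + u^{k+1} onto the interval [a, b], and line 10 of Algorithm 3 then updates the multiplier w. In NumPy these two steps are one-liners (the helper name is ours):

```python
import numpy as np

def update_v_w(u, w, a=0.0, b=np.inf):
    """Box-constraint step of NNADMDU-DEB: v = proj_[a,b](w + u) as in (43),
    followed by the multiplier update w = w + (u - v) (line 10 of Algorithm 3)."""
    v = np.clip(w + u, a, b)
    w = w + (u - v)
    return v, w
```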

Additionally, the dictionary updating rule is the same as that used in ADMDU-DEB. Altogether, a complete description of the method NNADMDU-DEB is given in Algorithm 3.

Algorithm 3 The detailed description of NNADMDU-DEB.
1: Initialization: Γ^0 = 0; C^0 = 0; D^0; u^0; μ; β; β_2; λ; ζ
2: while stopping criterion not satisfied (loop in k) do
3:   while stopping criterion not satisfied (loop in m) do
4:     Y^{m+1} = (λβ/(λ+β))(−D^k Γ^{k,m} + Ru^k + C^k/β)
5:     Γ^{k,m+1} = Shrink(Γ^{k,m} + ((λ+β)/(λγβ))(D^k)^T Y^{m+1}, (λ+β)/(λγβ))
6:   end while
7:   C^{k+1} = Y^{m+1};  Γ^{k+1,0} = Γ^{k,m+1};  D^{k+1} = D^k + ζC^{k+1}(Γ^{k+1,0})^T;
     d_q^{k+1} = d_q^{k+1}/‖d_q^{k+1}‖_2, q = 1, 2, . . . , Q
8:   Obtain u^{k+1} using (42)
9:   Obtain v^{k+1} using (43)
10:  Obtain w^{k+1} by w^{k+1} = w^k + (u^{k+1} − v^{k+1})
11: end while

5. Experimental results. In this section, the convergence behavior of the proposed method ADMDU-DEB is demonstrated in comparison with ADM-TV, and it is observed that both methods possess a similar iteration property under the AL framework. A comparison with the K-SVD based deblurring algorithm SR-IID [62] is also conducted. In addition, we compare ADMDU-DEB with five current state-of-the-art methods: the constrained TV deblurring method FISTA-TV [4], GPSR with wavelets [23], the l0-norm sparsity-based deblurring method L0-AbS [39], the BM3D deblurring method [14], and the ASDS method. Finally, the benefit of the nonnegativity constraint employed in NNADMDU-DEB is illustrated by a comparison with NNCGM.

In the experiments, the nominal values of the various parameters were set as follows: patch size √M = 8, overcompleteness of the dictionary K = 4 (correspondingly Q = 256), and patch overlap r = 1. Thereby the numbers of data samples were L = 69619 for image size 256 × 256 and L = 269361 for image size 512 × 512. Furthermore, ω = (√M/r)^2 = 64, β = 0.005, β_2 = 10, and ζ = 0.01; λ was empirically chosen as λ = 12, and we observe that the method still works well on the interval λ ∈ [10, 20]. The regularization parameter μ was hand-tuned for the best improvement in PSNR; it can also be determined by an empirical equation like that in [8]. The overcomplete discrete cosine transform (DCT) was chosen as the initial dictionary.

The quality of the reconstruction was measured by three indices: the signal-to-noise ratio (SNR), defined by SNR(u) = 10 log_10(‖u_0 − ū_0‖_2^2/‖u_0 − u‖_2^2), where u_0 and u are the original and restored images, respectively, and ū_0 is the mean intensity value of u_0; the commonly used peak signal-to-noise ratio (PSNR), given by PSNR = 20 log_10(255/‖u_0 − u‖); and the structural similarity (SSIM) index introduced in [52]. Because SNR and PSNR have limitations in capturing the subjective appearance of the results, the SSIM index is intended to measure the perceptual quality of the images.
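For reference, the two distortion-based indices can be computed as in the short sketch below (SSIM requires the full definition of [52] and is omitted). We take the norm in the PSNR formula to denote the root-mean-square error, which is the usual convention; the helper names are ours.

```python
import numpy as np

def snr(u0, u):
    """SNR in dB: 10 log10( ||u0 - mean(u0)||^2 / ||u0 - u||^2 )."""
    return 10.0 * np.log10(np.sum((u0 - u0.mean()) ** 2) / np.sum((u0 - u) ** 2))

def psnr(u0, u, peak=255.0):
    """PSNR in dB, with the error measured by the root-mean-square error."""
    rmse = np.sqrt(np.mean((u0 - u) ** 2))
    return 20.0 * np.log10(peak / rmse)
```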

5.1. Comparisons with ADM-TV and the deblurring algorithm using K-SVD. In this section, the proposed ADMDU-DEB was compared with ADM-TV [50] and the K-SVD-based deblurring algorithm SR-IID [62]. The comparison with ADM-TV demonstrates the superiority of the patch-based sparse representation prior constraint, while the comparison with SR-IID demonstrates the benefits of the AL and ADM techniques.

Since both ADM-TV and our method fall into the general framework of AL and ADM, the convergence properties of their iterative schemes are very similar. We used the image "Cameraman" blurred with a Gaussian kernel of standard deviation σ = 10 (hsize = 7) and added Gaussian noise (mean zero and standard deviation std = 255 × 10^{-3}) to illustrate this property. In the experiment the parameter μ of ADM-TV was set to the default value μ = 0.05 × 255^2/std^2; in their code, the reference images for the simulated experiments were normalized to a maximum magnitude of 1. Meanwhile, the parameter μ of ADMDU-DEB was set to μ = 9000, and we ran the method with the number of inner iterations m = 1, 2, 3 to test its robustness. Both ADM-TV and our method were terminated using the relative tolerance ‖u^{k+1} − u^k‖_2/‖u^{k+1}‖_2 ≤ ε with ε = 10^{-3}.

The changing SNR and objective values as functions of iteration numbers are presented inFigure 1(a),(f). First, it can be seen that both ADM-TV in Figure 1(a),(d) and our ADMDE-DEB in Figure 1(c),(f) are able to reach a reasonable SNR in merely several iterations anddecrease the objective function rapidly. Moreover, our patch-based methodology substantiallyimproves the performance of AL-related algorithms in terms of SNR and visual quality. Dueto the piecewise smooth regularization property of the TV norm, ADM-TV is able to preserveedges in the image while it fails to retain fine details. Besides, the ADMDU-DEB gainsalmost 1dB more than the patch-based method with fixed dictionary, denoted as ADMNDU-DEB shown in Figure 1(b)(e), which demonstrates that the learned dictionary is adapted tothe particular image instance, thereby favoring better sparsities and consequently much higherSNR. The sparsity in ADMDU-DEB is enforced on overlapping image patches emphasizinglocal structure. The ADMDU-DEB also gains more 0.6dB than the patch-based method withglobal trained dictionary in [20]. Considering paper length limitations, we display only partsD


Figure 1. SNR (top), function values (middle), and the deblurred results (bottom) of ADM-TV (left), ADMNDU-DEB (middle), and ADMDU-DEB (right).


Second, the impact of the parameter m on the method was investigated. As displayed in Figure 1(b),(c),(e),(f), solving the inner minimization of (15) with respect to {Γ, Z} inexactly indeed affects the iterative behavior of the algorithm, while the quality of the recovered images has little dependence on the value of m. In particular, when m = 1, the objective function values of both ADMNDU-DEB and ADMDU-DEB in Figure 1(e),(f) fluctuate during the first few iterations, although ADMDU-DEB converges faster than ADMNDU-DEB due to its dictionary updating stage, which accelerates the convergence.


Figure 2. The intermediate learned dictionary and the corresponding recovery image at iterations 1, 4, 9, 15, 22.

On the other hand, because all methods were stopped with the same setting ε = 10⁻³, it can be observed from Figure 1(b),(c) that the quality of the recovered images is almost the same regardless of whether m = 1, 2, or 3.

Finally, from the viewpoint of computational complexity, the computation times of our method ADMDU-DEB with m = 1, 2, 3 are 109.36s, 97.28s, and 113.69s, respectively. Meanwhile, the computation time of ADM-TV shown in Figure 1(a),(d) is 0.56s. The main bottleneck in our method is the patch-based sparse representation step. We expect the time to decrease substantially with conversion of the code to C/C++ and a GPU implementation. To sum up, we set m = 2 in all numerical experiments in the remainder of this paper.

To better investigate the iterative behavior of the proposed algorithm, we plot in Figure 2 the intermediate learned dictionary and the corresponding recovered image generated by ADMDU-DEB at iterations k = 1, 4, 9, 15, 22. As shown in Figure 2, the dictionary sequence becomes more and more regular and complex. In particular, similarly to Figure 1 (see Figure 1(f), m = 2), the objective value decreases quickly in the first few iterations; ADMDU-DEB changes the initial dictionary drastically in the first few stages (e.g., k = 1, 2, 3, . . . , 9). We can observe that from iterations 1 to 9 most of the edge-like atoms are constructed, and from iterations 10 to 22 more and more details are added to the atoms.
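The flavor of such a dictionary-updating stage can be conveyed by a generic gradient-descent update with atom renormalization, shown below for fixed sparse codes; this is only a schematic stand-in for the actual update rule of ADMDU-DEB, and the step size eta is arbitrary.

import numpy as np

def dictionary_update_step(D, X, Gamma, eta=0.1):
    # One schematic gradient step on ||X - D Gamma||_F^2 with respect to D,
    # followed by renormalization of each atom to unit l2 norm (assumed convention).
    # D: (patch_dim, num_atoms), X: (patch_dim, num_patches), Gamma: (num_atoms, num_patches)
    residual = X - D @ Gamma
    D = D + eta * residual @ Gamma.T          # descent direction for the fit term
    norms = np.linalg.norm(D, axis=0)
    return D / np.maximum(norms, 1e-12)       # keep atoms on the unit sphere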

To further compare our method with ADM-TV and SR-IID, we tested two types of blurring kernels from MATLAB, i.e., Gaussian and average. For each type of kernel, we ran the three algorithms with a set of kernel sizes. The additive noise used was Gaussian with mean zero and standard deviation std = 255 × 10⁻³. Detailed information on the setup of the different tests is summarized in Table 2.
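In MATLAB the two kernel types correspond to fspecial('gaussian', hsize, sigma) and fspecial('average', hsize). An equivalent sketch of the degradation model is given below; the circular boundary handling and the noise scaling are our assumptions about the simulation rather than details taken from the experimental code.

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(hsize, sigma):
    # MATLAB-style fspecial('gaussian', hsize, sigma) kernel
    ax = np.arange(hsize) - (hsize - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def average_kernel(hsize):
    # MATLAB-style fspecial('average', hsize) kernel
    return np.full((hsize, hsize), 1.0 / (hsize * hsize))

def degrade(u0, kernel, noise_std, seed=0):
    # Blur the clean image and add zero-mean Gaussian noise (assumed circular boundary).
    rng = np.random.default_rng(seed)
    f = convolve(u0.astype(float), kernel, mode='wrap')
    return f + noise_std * rng.standard_normal(u0.shape)

# Example: Cameraman-style test with hsize = 7, sigma = 10, std = 255e-3
# f = degrade(u0, gaussian_kernel(7, 10), 255e-3)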

The PSNR results of ADM-TV, SR-IID, and our ADMDU-DEB on the four tests are shown in Figure 3. As can be seen, on one hand, the bigger the size of the blurring kernel is, the larger the gap between ADM-TV and our ADMDU-DEB is. In particular, in the case of hsize = 15, our method gains nearly 1dB over ADM-TV regardless of the blurring type. On the other hand, as the size of the blurring kernel gets bigger, the performance of SR-IID becomes relatively better than that of ADM-TV but is still inferior to that of ADMDU-DEB.


Table 2. Information on setup for four tests.

Test no.   Image       Size        Blurring type   Blurring kernel parameters
1          Cameraman   256 × 256   Gaussian        hsize = {3, 5, ..., 15}, σ = 10
2          Lena        512 × 512   Gaussian        hsize = {3, 5, ..., 15}, σ = 10
3          Cameraman   256 × 256   Average         hsize = {3, 5, ..., 15}
4          Lena        512 × 512   Average         hsize = {3, 5, ..., 15}

Figure 3. PSNR results of ADM-TV, SR-IID, and our method ADMDU-DEB on the four tests described in Table 2.

The recovered results by ADM-TV, SR-IID, and our method for Cameraman and Lena with a Gaussian blurring kernel (kernel size 13) are shown in Figure 4. First, as seen from Figure 4(d),(e), although both SR-IID and ADMDU-DEB perform better than ADM-TV in Figure 4(c), the image restored by ADMDU-DEB has better visual quality. In particular, ADMDU-DEB well preserves the image edges and removes the noise without introducing too many artifacts. This phenomenon has already been observed in simple image denoising [32], where the difference between K-SVD-based and AL-based algorithms was indicated.


Figure 4. Recovered results for Cameraman and Lena with Gaussian blurring kernel (the blurring kernel size is 13). In each row from left to right: (top row) blurry and noisy images of Cameraman and Lena; (middle and bottom rows) results recovered by ADM-TV, SR-IID, and ADMDU-DEB.

Second, as seen from Figure 4(f),(g), the TV-based method ADM-TV is effective in suppressing the noise; however, it produces oversmoothed results and eliminates many image details. The stripes on the hat and the hair in Lena displayed in Figure 4(f) are not recovered adequately. The result by SR-IID shown in Figure 4(g) also fails to recover the stripes, while our method is effective in reconstructing smooth image areas as well as fine structures such as edges and details.

5.2. Comparisons with other methods. Since earlier algorithms such as Wiener deconvolution and Lucy–Richardson deconvolution (the built-in MATLAB functions deconvwnr.m and deconvlucy.m) perform worse than recently developed state-of-the-art methods, they were not included in our comparisons. We compared the proposed method with five recently proposed image deblurring methods: the constrained TV deblurring method FISTA-TV^2 [4], GPSR with wavelet^3 [23], the l0-norm sparsity-based deblurring method L0-Abs [39], the BM3D-based deblurring method BM3DDEB [14], and the ASDS method.^4

2. http://iew3.technion.ac.il/~becka/papers/tv_fista.zip
3. http://www.lx.it.pt/~mtf/GPSR/


Figure 5. Six test images (256 × 256).

It is worth noting that there are two ASDS algorithms, named ASDS-TD1 and ASDS-TD2, which differ in the set of training images they use [19]. We chose ASDS-TD2 in our comparison (ASDS-TD2 performs a little better than ASDS-TD1). Since the two other complementary regularization terms in the ASDS scheme, the autoregressive model-based regularization and the nonlocal similarity regularization, are not explicitly devoted to sparsity promotion, we did not include them in our comparison. In the following four experiments, two types of blur kernels, a Gaussian kernel of standard deviation σ = 3 and a 9 × 9 uniform kernel, were used to simulate blurred images. Additive Gaussian white noise with standard deviations std = √2 and std = 2 was then added to the blurred images, respectively. Figure 5 displays the six test images. For simplicity, we terminated the algorithm ADMDU-DEB by a relative change of the restored image with ε = 10⁻³ in all these experiments.

The PSNR and SSIM results of the competing methods are compared in Tables 3–6. One can see that the latter three adaptive patch-based methods outperform the three methods with a prespecified transform, such as a wavelet or bounded variation model; this is because they exploit the nonlocal redundancies in the image. For the experiments using a uniform blur kernel, the average PSNRs of ADMDU-DEB and BM3DDEB are almost the same. In particular, for the test image Boats, our method gains 0.3 dB over BM3DDEB. For the experiments using a Gaussian blur kernel, the PSNR gaps between all the competing methods become smaller, and the average PSNRs of BM3DDEB, ASDS-TD2, and ADMDU-DEB are almost the same. In other words, our method leads to satisfactory quality and is very competitive with BM3DDEB and ASDS-TD2, if not always reaching the highest PSNR value.

4. http://www4.comp.polyu.edu.hk/~cslzhang/ASDS_AReg.htm


Table 3. The PSNR (dB) and SSIM results of deblurred images (uniform blur kernel, noise level std = √2).

Image       FISTA-TV        GPSR-wavelet    L0-Abs          BM3DDEB         ASDS-TD2        Our method
Baboon      21.16(0.5095)   21.01(0.4801)   21.21(0.5126)   21.46(0.5315)   21.45(0.5863)   21.46(0.5489)
Barbara     25.59(0.7373)   25.63(0.7462)   26.28(0.7671)   27.90(0.8171)   26.65(0.7709)   27.41(0.7892)
Boats       28.94(0.8331)   28.78(0.8164)   29.81(0.8496)   29.90(0.8528)   28.94(0.8039)   30.38(0.8591)
Cameraman   26.72(0.8330)   25.71(0.8052)   26.86(0.8361)   27.24(0.8308)   27.14(0.7836)   27.30(0.8258)
Pentagon    25.12(0.6835)   24.66(0.6371)   25.26(0.6830)   26.00(0.7210)   25.62(0.7290)   26.07(0.7283)
Peppers     28.44(0.8131)   27.78(0.8091)   28.75(0.8274)   28.70(0.8151)   28.25(0.7682)   28.83(0.8204)

Table 4. The PSNR (dB) and SSIM results of deblurred images (uniform blur kernel, noise level std = 2).

Image       FISTA-TV        GPSR-wavelet    L0-Abs          BM3DDEB         ASDS-TD2        Our method
Baboon      20.98(0.4965)   20.49(0.4014)   20.80(0.4498)   21.13(0.4932)   21.10(0.5429)   21.10(0.5106)
Barbara     25.12(0.7031)   24.78(0.6933)   25.37(0.7248)   27.16(0.7881)   26.35(0.7695)   26.79(0.7667)
Boats       27.85(0.7880)   27.45(0.7802)   28.75(0.8181)   29.19(0.8335)   28.83(0.8124)   29.49(0.8323)
Cameraman   26.04(0.7772)   24.77(0.7813)   25.96(0.8131)   26.53(0.8136)   26.81(0.8156)   26.58(0.8093)
Pentagon    24.59(0.6587)   23.82(0.5747)   24.54(0.6297)   25.46(0.6885)   25.31(0.7042)   25.41(0.6865)
Peppers     27.46(0.7660)   26.96(0.7818)   28.05(0.8106)   28.15(0.7999)   28.24(0.7904)   28.29(0.8084)

Table 5. The PSNR (dB) and SSIM results of deblurred images (Gaussian blur kernel, noise level std = √2).

Image       FISTA-TV        GPSR-wavelet    L0-Abs          BM3DDEB         ASDS-TD2        Our method
Baboon      20.01(0.3396)   20.18(0.3524)   20.24(0.3673)   20.34(0.3923)   20.35(0.3889)   20.14(0.3892)
Barbara     23.22(0.5971)   23.56(0.6121)   23.71(0.6460)   23.77(0.6489)   23.81(0.6556)   23.83(0.6628)
Boats       25.53(0.7056)   26.09(0.7085)   26.64(0.7464)   26.99(0.7486)   27.14(0.7633)   27.22(0.7732)
Cameraman   23.26(0.7483)   23.00(0.7124)   23.51(0.7521)   23.77(0.7249)   23.90(0.7637)   23.89(0.7535)
Pentagon    22.48(0.4881)   23.05(0.5276)   23.39(0.5540)   23.82(0.5994)   23.88(0.5958)   23.65(0.5899)
Peppers     25.58(0.7411)   25.79(0.7246)   26.61(0.7843)   26.65(0.7626)   27.01(0.7900)   26.43(0.7898)

Table 6. The PSNR (dB) and SSIM results of deblurred images (Gaussian blur kernel, noise level std = 2).

Image       FISTA-TV        GPSR-wavelet    L0-Abs          BM3DDEB         ASDS-TD2        Our method
Baboon      20.00(0.3389)   20.02(0.3407)   20.17(0.3533)   20.28(0.3826)   20.28(0.3758)   20.05(0.3724)
Barbara     23.19(0.5933)   23.41(0.6101)   23.62(0.6351)   23.70(0.6399)   23.72(0.6464)   23.75(0.6532)
Boats       25.48(0.7032)   25.58(0.7017)   26.24(0.7292)   26.71(0.7359)   26.82(0.7488)   26.91(0.7598)
Cameraman   23.23(0.7465)   22.77(0.7025)   23.25(0.7412)   23.60(0.7198)   23.76(0.7568)   23.68(0.7449)
Pentagon    22.46(0.4876)   23.02(0.5102)   23.13(0.5299)   23.65(0.5843)   23.69(0.5770)   23.43(0.5688)
Peppers     25.50(0.7373)   25.39(0.7425)   26.24(0.7723)   26.44(0.7555)   26.76(0.7800)   25.94(0.7791)


Figure 6. Comparison of the deblurred images of Boats by different methods (uniform blur kernel and std = √2). Top row: FISTA-TV, GPSR-wavelet, and L0-Abs. Bottom row: BM3DDEB, ASDS-TD2, and our proposed method ADMDU-DEB.

Some visual comparisons are shown in Figures 6–7. It can be observed that there are some amplified noise and undesirable artifacts in the images deblurred by the wavelet shrinkage method. The TV-based method FISTA-TV tends to estimate in a cartoon-like manner. L0-Abs is very effective in reconstructing smooth image areas but fails to reconstruct fine image edges. BM3DDEB is very competitive in recovering the image structures; however, it tends to generate some "ghost" artifacts around the edges. It is interesting to mention that both L0-Abs and BM3DDEB are derived from an ℓ0-norm penalty and transform-domain shrinkage, and the main difference lies in the fact that one is point-based while the other is patch-based. ASDS-TD2 generates many visually disturbing artifacts in the restored image. We argue that the estimation errors potentially come from two aspects. One is that its subdictionaries are learned offline from a training dataset, which may not be efficient enough to represent the target image content. The other is that the dictionary learning procedure relies on a K-means clustering algorithm and the PCA technique, which is prone to local minima. Our method leads to the best visual quality: it not only removes the blurring effects and noise but also reconstructs more and sharper image edges than the other methods. Typically, as seen from the poles of Boats in Figure 6 restored by BM3DDEB, ASDS-TD2, and our ADMDU-DEB, there are far fewer artifacts near the edges in our estimate. This phenomenon was also observed in our preceding work on image denoising [32, 31]. We also compared the computation time of our method and ASDS-TD2. The CPU times of ASDS-TD2 in Figures 6 and 7 are 631.68s and 608.63s, respectively, and the CPU times of our method are 107.93s and 99.36s. The CPU load of ASDS-TD2 is about six times that of our method.


Figure 7. Comparison of the deblurred images of Peppers by different methods (uniform blur kernel and std = 2). Top row: FISTA-TV, GPSR-wavelet, and L0-Abs. Bottom row: BM3DDEB, ASDS-TD2, and our proposed method ADMDU-DEB.

5.3. Comparisons with bound constraint. In this subsection, our deblurring method was compared with a leading total variation method, NNCGM, by Krishnan et al. [30]. Other bound-constrained TV methods reviewed in section 4 offer almost the same SNR and SSIM index values as NNCGM and hence were not included in our comparisons. The test images are the Satellite (128 × 128 pixels), Cameraman, and Pentagon (256 × 256 pixels) images.

The Satellite image was used for the deconvolution case and was blurred by a 5 × 5 Gaussian kernel to match one of the experiments described in [30] (see page 2768 in [30]) and then corrupted with Gaussian noise, giving a degraded image with a PSNR of 24.05dB. We compared our result with the primal-dual algorithm NNCGM using the authors' code [30]. NNCGM gains 27.61 dB while our approach obtains 29.65 dB, a substantial improvement. It can be observed from Figure 8 that the output of our method NNADMDU-DEB in Figure 8(c),(f) has fewer spurious oscillations than that of NNCGM in Figure 8(b),(e), whose oscillations result in noticeable reconstruction artifacts.

Integrating the nonnegativity constraint into our AL framework can promote a more proper sparse representation and consequently yield better restoration results. To better understand the contribution of the nonnegativity constraint, we compared our extended method NNADMDU-DEB with the basic ADMDU-DEB under the same experiment settings, i.e., the third experiment for the image Cameraman and the fourth experiment for the image Pentagon in subsection 5.2. For the former experiment, the PSNR/SSIM of ADMDU-DEB is 23.87/0.7533 while that of NNADMDU-DEB is 24.03/0.7536. For the latter experiment, the PSNR/SSIM of ADMDU-DEB is 23.33/0.5678 while that of NNADMDU-DEB is 23.40/0.5829. The PSNR/SSIM values of NNADMDU-DEB are higher than those of ADMDU-DEB, which implies the possibility of achieving better sparse representation by exploiting such a nonnegativity constraint.
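Within an AL/ADM iteration, a nonnegativity constraint of this kind is commonly enforced by projecting the pixel-domain update onto the nonnegative orthant. The sketch below shows this generic projection step; it is an assumption about how the constraint enters NNADMDU-DEB rather than the paper's exact update.

import numpy as np

def project_nonnegative(u):
    # Euclidean projection onto the nonnegative orthant: u_i <- max(u_i, 0)
    return np.maximum(u, 0.0)

# In an ADM-style scheme, the projection would follow the pixel-domain update:
# u = image_update_step(...)      # placeholder for the unconstrained update
# u = project_nonnegative(u)      # enforce u >= 0 at every outer iteration (assumed)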


Figure 8. Comparison of the deblurred images of Satellite by different methods. Top row: original, NNCGM method, and our method. Bottom row: degraded image, the restoration error magnitudes of NNCGM, and our proposed method NNADMDU-DEB.

As shown in Figure 9, both deblurred results in Figures 9(c) and 9(f) recover more fine details than those in Figures 9(b) and 9(e). Moreover, the learned dictionaries corresponding to the restored images in Figures 9(b) and 9(c) are presented in Figures 10(a) and 10(b), and it can be easily observed that more atoms are updated in NNADMDU-DEB than in ADMDU-DEB.

6. Conclusions. In this paper, we have presented a new algorithm for bound-constrained dictionary learning regularization. The proposed algorithm is based on the AL scheme and the ADM; i.e., it uses a splitting approach to decouple the dictionary learning sparse representation minimization from enforcing the data-fidelity and value constraints. Consequently, the patch-based sparse representation solvers and the solution for the image itself can be employed in an alternating minimization.

In comparison with its predecessors, the proposed method has the following characteristics. (i) The modified ISTA used in sparse coding and the simple gradient descent used in dictionary updating enable the whole algorithm to converge in a relatively small number of iterations. Typically, it runs several times faster than recently presented patch-based methods. (ii) The method ADMDU-DEB can be naturally extended to NNADMDU-DEB by incorporating the nonnegativity constraint; better results are obtained by enforcing the constraint during the solution process itself. Numerical experiments show that the proposed method can effectively reconstruct image details, outperforming many state-of-the-art deblurring methods in terms of PSNR, SSIM, and visual perception.
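For reference, the core of an ISTA-type sparse coding step is a gradient step on the data-fit term followed by soft-thresholding, as sketched below for a fixed dictionary D; the step size and threshold are free parameters here, and the accelerations of the modified ISTA are not reproduced.

import numpy as np

def soft_threshold(z, tau):
    # Componentwise soft-thresholding, the proximal operator of tau*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_step(gamma, D, x, step, tau):
    # One ISTA iteration on min_gamma 0.5*||x - D gamma||^2 + tau*||gamma||_1
    grad = D.T @ (D @ gamma - x)              # gradient of the quadratic fit term
    return soft_threshold(gamma - step * grad, step * tau)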


Figure 9. Comparison of ADMDU-DEB and NNADMDU-DEB under the experiment settings of the third experiment for Cameraman and the fourth experiment for Pentagon in subsection 5.2. (a),(d) The blurred and noisy images; (b),(e) the images restored by ADMDU-DEB; (c),(f) the images restored by NNADMDU-DEB.

Figure 10. The learned dictionaries corresponding to the restored images in Figure 9(b) and (c), respectively.

The main contribution of this paper is to apply AL and ADM to the sparse representation based deconvolution problem with additive Gaussian noise. Moreover, like ADM-TV and its variants [59, 58], our approaches ADMDU-DEB and NNADMDU-DEB can be extended or modified to apply to both single- and multichannel images with either Gaussian or impulsive noise, and they permit cross-channel blurs when the underlying image has more than one channel. They can also be further modified to tackle general linear inverse problems like MRI reconstruction and compressed sensing [59, 57], which will be topics of future work.


REFERENCES

[1] M. Afonso, J. Bioucas-Dias, and M. Figueiredo, Fast image recovery using variable splitting and constrained optimization, IEEE Trans. Image Process., 19 (2010), pp. 2345–2356.
[2] M. Afonso, J. Bioucas-Dias, and M. Figueiredo, An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems, IEEE Trans. Image Process., 20 (2011), pp. 681–695.
[3] M. Aharon, M. Elad, and A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process., 54 (2006), pp. 4311–4322.
[4] A. Beck and M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems, IEEE Trans. Image Process., 18 (2009), pp. 2419–2434.
[5] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183–202.
[6] J. Bioucas-Dias and M. Figueiredo, A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process., 16 (2007), pp. 2992–3004.
[7] A. Buades, B. Coll, and J. M. Morel, A non-local algorithm for image denoising, in Proceedings of the IEEE Computer Vision and Pattern Recognition, Vol. 2, IEEE, Washington, DC, 2005, pp. 60–65.
[8] M. Burger, S. Osher, J. Xu, and G. Gilboa, Nonlinear inverse scale space methods for image restoration, in Variational, Geometric, and Level Set Methods in Computer Vision, Springer, New York, 2005, pp. 25–36.
[9] S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, An augmented Lagrangian method for total variation video restoration, IEEE Trans. Image Process., 20 (2011), pp. 3097–3111.
[10] G. Chantas, N. P. Galatsanos, R. Molina, and A. K. Katsaggelos, Variational Bayesian image restoration with a product of spatially weighted total variation image priors, IEEE Trans. Image Process., 19 (2010), pp. 351–362.
[11] R. Chartrand and B. Wohlberg, Total-variation regularization with bound constraints, in Proceedings of the IEEE Acoustics, Speech and Signal Processing Conference, IEEE, Washington, DC, 2010, pp. 766–769.
[12] F. Couzinie-Devy, J. Mairal, F. Bach, and J. Ponce, Dictionary Learning for Deblurring and Digital Zoom, arXiv preprint, arXiv:1110.0957, 2011.
[13] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process., 16 (2007), pp. 2080–2095.
[14] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image restoration by sparse 3D transform-domain collaborative filtering, in Proceedings of the Electronic Imaging Conference, International Society for Optics and Photonics, SPIE, Bellingham, WA, 2008, p. 681207.
[15] J. Dahl, P. C. Hansen, S. H. Jensen, and T. L. Jensen, Algorithms and software for total variation image reconstruction via first-order methods, Numer. Algorithms, 53 (2010), pp. 67–92.
[16] A. Danielyan, V. Katkovnik, and K. Egiazarian, BM3D frames and variational image deblurring, IEEE Trans. Image Process., 21 (2012), pp. 1715–1728.
[17] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), pp. 1413–1457.
[18] N. Dey, L. Blanc-Feraud, C. Zimmer, Z. Kam, J. Olivo-Marin, and J. Zerubia, A deconvolution method for confocal microscopy with total variation regularization, in Proceedings of the IEEE Symposium on Biomedical Imaging: Nano to Macro, IEEE, Washington, DC, 2004, pp. 1223–1226.
[19] W. Dong, L. Zhang, G. Shi, and X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Trans. Image Process., 20 (2011), pp. 1838–1857.
[20] M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15 (2006), pp. 3736–3745.
[21] K. Engan, S. Aase, and J. H. Husoy, Method of optimal directions for frame design, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 5, IEEE, Washington, DC, 1999, pp. 2443–2446.
[22] E. Esser, Applications of Lagrangian-Based Alternating Direction Methods and Connections to Split Bregman, CAM report 9, UCLA, Los Angeles, CA, 2009.


[23] M. A. Figueiredo, R. Nowak, and S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Selected Topics Signal Process., 1 (2007), pp. 586–597.
[24] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), pp. 323–343.
[25] R. Gonzalez and R. Woods, Digital Image Processing, 3rd ed., Prentice–Hall, Englewood Cliffs, NJ, 2007.
[26] K. Ho, C. Beling, S. Fung, K. Chong, M. Ng, and A. Yip, Deconvolution of coincidence Doppler broadening spectra using iterative projected Newton method with non-negativity constraints, Rev. Sci. Instruments, 74 (2003), pp. 4779–4787.
[27] Z. Hu, J.-B. Huang, and M.-H. Yang, Single image deblurring with adaptive dictionary learning, in Proceedings of the 17th IEEE International Conference on Image Processing, IEEE, Washington, DC, 2010, pp. 1169–1172.
[28] Y. Huang, M. K. Ng, and Y.-W. Wen, A fast total variation minimization method for image restoration, Multiscale Model. Simul., 7 (2008), pp. 774–795.
[29] S. Kindermann, S. Osher, and P. W. Jones, Deblurring and denoising of images by nonlocal functionals, Multiscale Model. Simul., 4 (2005), pp. 1091–1115.
[30] D. Krishnan, P. Lin, and A. Yip, A primal-dual active-set method for non-negativity constrained total variation deblurring problems, IEEE Trans. Image Process., 16 (2007), pp. 2766–2777.
[31] Q. Liu, J. Luo, S. Wang, M. Xiao, and M. Ye, An augmented Lagrangian multi-scale dictionary learning algorithm, EURASIP J. Adv. Signal Process., 2011 (2011), pp. 1–16.
[32] Q. Liu, S. Wang, and J. Luo, A novel predual dictionary learning algorithm, J. Vis. Commun. Image Represent., 23 (2012), pp. 182–193.
[33] Y. Lou, A. Bertozzi, and S. Soatto, Direct sparse deblurring, J. Math. Imaging Vision, 39 (2011), pp. 1–12.
[34] F. Malgouyres and T. Zeng, A predual proximal point algorithm solving a non negative basis pursuit denoising model, Internat. J. Comput. Vis., 83 (2009), pp. 294–311.
[35] R. Neelamani, H. Choi, and R. Baraniuk, ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems, IEEE Trans. Signal Process., 52 (2004), pp. 418–433.
[36] Y. Nesterov, Smooth minimization of non-smooth functions, Math. Program., 103 (2005), pp. 127–152.
[37] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), pp. 460–489.
[38] M. Persson, D. Bone, and H. Elmqvist, Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography, Phys. Med. Biol., 46 (2001), p. 853.
[39] J. Portilla, Image restoration through ℓ0 analysis-based sparse optimization in tight frames, in Proceedings of the 16th IEEE International Conference on Image Processing, IEEE, Washington, DC, 2009, pp. 3909–3912.
[40] W. Richardson, Bayesian-based iterative method of image restoration, J. Opt. Soc. Amer., 62 (1972), pp. 55–59.
[41] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Math. Oper. Res., 1 (1976), pp. 97–116.
[42] P. Rodríguez, Multiplicative updates algorithm to minimize the generalized total variation functional with a non-negativity constraint, in Proceedings of the 17th IEEE International Conference on Image Processing, IEEE, Washington, DC, 2010, pp. 2509–2512.
[43] P. Rodríguez and B. Wohlberg, Efficient minimization method for a generalized total variation functional, IEEE Trans. Image Process., 18 (2009), pp. 322–332.
[44] R. W. Schafer, R. M. Mersereau, and M. A. Richards, Constrained iterative restoration algorithms, Proc. IEEE, 69 (1981), pp. 432–450.
[45] S. Setzer, G. Steidl, and T. Teuber, Deblurring Poissonian images by split Bregman techniques, J. Vis. Commun. Image Represent., 21 (2010), pp. 193–199.
[46] F. Sha, Y. Lin, L. Saul, and D. Lee, Multiplicative updates for nonnegative quadratic programming, Neural Comput., 19 (2007), pp. 2004–2031.
[47] Y. Shen, Z. Wen, and Y. Zhang, Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization, Optim. Methods Software, (2012), pp. 1–25.


[48] K. Skretting and K. Engan, Recursive least squares dictionary learning algorithm, IEEE Trans. Signal Process., 58 (2010), pp. 2121–2130.
[49] G. Steidl and T. Teuber, Removing multiplicative noise by Douglas–Rachford splitting methods, J. Math. Imaging Vision, 36 (2010), pp. 168–184.
[50] M. Tao, J. Yang, and B. He, Alternating Direction Algorithms for Total Variation Deconvolution in Image Reconstruction, TR0918, Department of Mathematics, Nanjing University, Nanjing, Jiangsu, China, 2009.
[51] Y. Wang, J. Yang, W. Yin, and Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., 1 (2008), pp. 248–272.
[52] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), pp. 600–612.
[53] S. Wright, R. Nowak, and M. Figueiredo, Sparse reconstruction by separable approximation, IEEE Trans. Signal Process., 57 (2009), pp. 2479–2493.
[54] C. Wu and X. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sci., 3 (2010), pp. 300–339.
[55] L. Xiao, L.-L. Huang, and Z.-H. Wei, A Weberized total variation regularization-based image multiplicative noise removal algorithm, EURASIP J. Adv. Signal Process., 2010 (2010), p. 38.
[56] J. Yang, J. Wright, T. Huang, and Y. Ma, Image super-resolution via sparse representation, IEEE Trans. Image Process., 19 (2010), pp. 2861–2873.
[57] J. Yang and Y. Zhang, Alternating direction algorithms for ℓ1-problems in compressive sensing, SIAM J. Sci. Comput., 33 (2011), pp. 250–278.
[58] J. Yang, Y. Zhang, and W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Comput., 31 (2009), pp. 2842–2865.
[59] J. Yang, Y. Zhang, and W. Yin, A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data, IEEE J. Selected Topics Signal Process., 4 (2010), pp. 288–297.
[60] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), pp. 143–168.
[61] H. Zhang, J. Yang, Y. Zhang, and T. Huang, Sparse representation based blind image deblurring, in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), IEEE, Washington, DC, 2011, pp. 1–6.
[62] H. Zhang and Y. Zhang, Sparse representation based iterative incremental image deblurring, in Proceedings of the 16th IEEE International Conference on Image Processing, IEEE, Washington, DC, 2009, pp. 1293–1296.
[63] X. Zhang, M. Burger, X. Bresson, and S. Osher, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction, SIAM J. Imaging Sci., 3 (2010), pp. 253–276.
