Adaptive Sparse Channel Estimation Using Re-Weighted Zero-Attracting Normalized Least Mean Fourth

Guan Gui, Abolfazl Mehbodniya and Fumiyuki Adachi
Department of Communication Engineering, Graduate School of Engineering,
Tohoku University, Sendai, Japan
{gui, mehbod}@mobile.ecei.tohoku.ac.jp, adachi@ecei.tohoku.ac.jp

Abstract—Accurate channel estimation is one of the key technical issues in broadband wireless communications. The standard normalized least mean fourth (NLMF) algorithm was applied to adaptive channel estimation (ACE). Since the channel is often described by a sparse channel model, such sparsity could be exploited and the estimation performance could then be improved using adaptive sparse channel estimation (ASCE), e.g., the zero-attracting normalized least mean fourth (ZA-NLMF) algorithm. However, this algorithm cannot exploit channel sparsity efficiently. By virtue of geometrical figures, we explain the reason why the ℓ1-norm sparse constraint penalizes channel coefficients uniformly. In this paper, we propose a novel ASCE method using the re-weighted zero-attracting NLMF (RZA-NLMF) algorithm. Simulation results show that the proposed method achieves better estimation performance than the conventional one.

Keywords—normalized LMF (NLMF), adaptive sparse channel estimation (ASCE), re-weighted zero-attracting NLMF (RZA-NLMF).

I. INTRODUCTION

Broadband signal transmission is becoming one of the mainstream techniques in next-generation communication systems. Since frequency-selective channel fading is unavoidable, accurate channel state information (CSI) is necessary at the receiver for adaptive coherent detection [1]. One of the effective approaches is adopting adaptive channel estimation (ACE). A typical framework of ACE is shown in Fig. 1. It is well known that ACE using the least mean fourth (LMF) algorithm outperforms the least mean square (LMS) algorithm in achieving a good balance between convergence and steady-state performance. However, the standard LMF algorithm is unstable because its stability depends on the following three factors: input signal power, noise power and weight initialization [2]. To improve the stability of LMF, a stable normalized LMF (NLMF) algorithm was proposed in [3]. Recently, many channel measurement experiments have verified that broadband channels often exhibit a sparse structure, as shown in Fig. 2. In other words, a sparse channel consists of very few nonzero channel coefficients, while most of them are zeros. Unfortunately, ACE using the NLMF algorithm always neglects this inherent sparse structure information, which may degrade the estimation performance.

Fig. 1. ASCE for broadband communication systems.

To estimate such a channel, adaptive sparse channel estimation (ASCE) methods using sparse least mean fourth algorithms (ASCE-LMF) were proposed in [4]. However, the ASCE-LMF method is not stable except in the low-SNR regime (below 5 dB). Based on the dense channel model, the author in [3] proposed a normalized LMF (NLMF) algorithm to improve the stability, but the stable NLMF algorithm cannot be applied directly to sparse channel estimation. Based on the method in [3], we proposed ASCE using the zero-attracting NLMF (ZA-NLMF) algorithm [5]. From a geometrical perspective, we presented a detailed explanation of ℓ1-norm zero-attracting based sparse channel estimation.


Since the ℓ1-norm zero-attracting term introduces a uniform sparse constraint on different magnitudes of channel coefficients, the performance of ZA-NLMF degrades.

Inspired by the re-weighted ℓ1-norm minimization algorithm in [6], in this paper we propose an improved ASCE method using the re-weighted zero-attracting NLMF (RZA-NLMF) algorithm. Unlike the ZA-NLMF algorithm [5], the RZA-NLMF algorithm can penalize different magnitudes of channel coefficients with different sparse constraint strengths. The effectiveness of the proposed method is confirmed by computer simulation.

The remainder of this paper is organized as follows. The system model is described and the standard LMF and NLMF algorithms are introduced in Section II. In Section III, ASCE using the ZA-NLMF algorithm is introduced and the improved ASCE using the RZA-NLMF algorithm is developed. Computer simulations are presented in Section IV in order to evaluate and compare the performance of the proposed ASCE methods. Finally, we conclude the paper in Section V.

II. SYSTEM MODEL AND STANDARD LMF ALGORITHM

Consider a baseband frequency-selective fading wireless communication system where the FIR sparse channel vector h = [h_0, h_1, ..., h_{N-1}]^T is of length N and is supported by only K nonzero channel taps. Assume that an input training signal x(n) is used to probe the unknown sparse channel. At the receiver side, the observed signal y(n) is given by

y(n) = h^T x(n) + z(n),   (1)

where x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T denotes the vector of the training signal x(n), and z(n) is additive white Gaussian noise (AWGN) assumed to be independent of x(n). The objective of ASCE is to adaptively estimate the unknown sparse channel from the training signal x(n) and the observed signal y(n). According to [2], we can apply the standard LMF algorithm to adaptive channel estimation, with the cost function

G(h̃(n)) = (1/4) e^4(n),   (2)

where h̃(n) is the channel estimator at iteration n and e(n) = y(n) − h̃^T(n) x(n) is the n-th adaptive update error. The update equation of the filter can be written as

h̃(n+1) = h̃(n) − μ ∂G(h̃(n))/∂h̃(n) = h̃(n) + μ e^3(n) x(n),   (3)
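As a concrete reading of (2)-(3), the following Python sketch performs one LMF iteration; the function name and the default step-size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lmf_update(h_est, x_vec, y_obs, mu=1e-3):
    """One standard LMF iteration for adaptive channel estimation, cf. (2)-(3)."""
    e = y_obs - h_est @ x_vec      # instantaneous error e(n) = y(n) - h~^T(n) x(n)
    # gradient descent on (1/4) e^4(n) yields the cubed-error correction of (3)
    return h_est + mu * (e ** 3) * x_vec
```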

where μ denotes the step-size of the gradient descent. Unfortunately, channel estimation using the standard LMF algorithm is not stable in the adaptive update process and hence it cannot be employed directly [2]. To improve the reliability of LMF, a stable LMF algorithm was proposed by virtue of normalization in [3], which was termed the normalized LMF (NLMF) algorithm. Its update equation is given by

h̃(n+1) = h̃(n) + μ e^3(n) x(n) / (‖x(n)‖^2 (‖x(n)‖^2 + e^2(n)))
        = h̃(n) + μ_N e(n) x(n) / ‖x(n)‖^2,   (4)

where

μ_N = μ e^2(n) / (‖x(n)‖^2 + e^2(n)),   (5)

denotes the variable step-size of the gradient descent, and ‖∙‖ is the Euclidean norm operator, i.e., ‖x‖^2 = ∑_i |x_i|^2.

Unfortunately, the interpretation of the relation between μ_N and e(n) given in [7] is not correct. Here, we observe that when e^2(n) ≫ ‖x(n)‖^2, μ_N approaches μ; when e^2(n) ≈ ‖x(n)‖^2, μ_N approaches μ/2; and when e^2(n) ≪ ‖x(n)‖^2, μ_N approaches 0. Fig. 3 illustrates this intuitive relation between μ_N and e(n) for the standard NLMF algorithm with three step-sizes μ: 0.2, 0.6 and 1.2. As e(n) increases, the depicted curves in Fig. 3 show that the stability of NLMF approaches that of NLMS, whose stability was proven in [8]. Please also note that when e(n) decreases, the variable step-size also reduces, which ensures stability.
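The next sketch implements the normalized update (4)-(5) and numerically checks the three regimes of the variable step-size μ_N discussed above; again, names and values are illustrative assumptions.

```python
import numpy as np

def nlmf_update(h_est, x_vec, y_obs, mu=0.6):
    """One NLMF iteration, cf. (4)-(5): the LMF correction normalized for stability."""
    e = y_obs - h_est @ x_vec
    x_norm2 = x_vec @ x_vec                   # ||x(n)||^2
    mu_n = mu * e ** 2 / (x_norm2 + e ** 2)   # variable step-size of (5)
    return h_est + mu_n * e * x_vec / x_norm2

# the three regimes of mu_N: e^2 >> ||x||^2, e^2 ~ ||x||^2, e^2 << ||x||^2
mu = 0.6
for e2, xn2 in [(1e4, 1.0), (1.0, 1.0), (1e-4, 1.0)]:
    print(mu * e2 / (xn2 + e2))               # ~mu, mu/2, ~0 respectively
```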

Fig. 2. A typical example of a sparse multipath channel.

Fig. 3. Relation between μ_N and e(n).


III. ADAPTIVE SPARSE CHANNEL ESTIMATION

A. Geometrical interpretation of CS-based sparse channel estimation

Consider a baseband system model for CS-based sparse channel estimation. Assume a T-length training sequence x = [x_1, x_2, ..., x_T]^T; the received signal model can then be written as

y = Xh + z,   (6)

where X ∈ R^{T×N} is a Toeplitz training signal matrix whose first row is determined by x; z = [z_1, z_2, ..., z_T]^T is an AWGN vector with distribution N(0, σ_n^2 I_T), with I_T denoting the T×T identity matrix; and h = [h_0, h_1, ..., h_{N-1}]^T is the N×1 unknown channel vector, the same as in Eq. (1). From the perspective of CS, the training signal matrix X satisfies the restricted isometry property (RIP) [9] of order K with positive parameter δ ∈ (0,1), i.e., X ∈ RIP(K, δ), if

(1 − δ)‖h‖^2 ≤ ‖Xh‖^2 ≤ (1 + δ)‖h‖^2,   (7)

holds for all h having no more than K nonzero taps. Then, the LASSO algorithm [10] based sparse channel estimator is given by

h_LASSO = arg min_h { (1/2)‖y − Xh‖^2 + λ‖h‖_1 },   (8)

where λ denotes a regularization parameter which balances the mean-square error (MSE) term and the sparsity of h. In Fig. 4, the geometrical interpretation of CS-based sparse channel estimation is depicted. When λ > 0, as the figure shows, the sparse channel estimator can be obtained from the convex point between the ℓ1-norm sparse constraint and the solution plane; when λ = 0, however, there is no convex point between the ℓ1-norm constraint and the solution plane. One can also extend this geometrical figure to explain adaptive sparse channel estimation using the ZA-NLMF algorithm in [5].

Fig. 4. Sparse channel estimation with ℓ1-norm sparse constraint.

B. ASCE using ZA-NLMF algorithm

Recall that the adaptive channel estimation method uses the standard NLMF algorithm in Eq. (4); however, this method does not take advantage of the channel sparsity, because its original cost function in (2) does not utilize any sparse constraint or penalty function. According to the LASSO algorithm in (8), we introduce an ℓ1-norm sparse constraint into the cost function in (2) and obtain a new cost function,

G_ZA(h̃(n)) = (1/4) e^4(n) + λ_ZA ‖h̃(n)‖_1,   (9)

where λ_ZA denotes a regularization parameter which balances the mean-fourth error (MFE) term and the sparsity of h̃(n). The update equation of the ZA-NLMF algorithm [5] is then given by

h̃(n+1) = h̃(n) + μ_N e(n) x(n)/‖x(n)‖^2 − ρ_ZA sgn(h̃(n)),   (10)

where ρ_ZA is a zero-attracting parameter determined by the step-size μ and the regularization parameter λ_ZA, and sgn(∙) denotes the sign function, applied element-wise and generated from

sgn(h̃_i(n)) = 1 if h̃_i(n) > 0; 0 if h̃_i(n) = 0; −1 if h̃_i(n) < 0,   (11)

with h̃(n) = [h̃_0(n), h̃_1(n), ..., h̃_{N-1}(n)]^T and i ∈ {0, 1, ..., N−1}. It is well known that ZA-NLMF uses the ℓ1-norm sparse constraint to approximate the optimal sparse channel estimation [11].

As shown in Fig. 5, however, the ℓ1-norm sparse constraint on the solution plane is uniform for each channel coefficient. In other words, ZA-NLMF cannot exploit the channel sparsity efficiently.

Fig. 5. Sparse constraint strength comparison between different channel coefficients by virtue of re-weighted ℓ1-norm zero-attracting.
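To make the ZA-NLMF recursion of (9)-(11) concrete, here is a minimal Python sketch; the function name and the value of the zero-attracting parameter rho_za are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def za_nlmf_update(h_est, x_vec, y_obs, mu=1.0, rho_za=1e-6):
    """One ZA-NLMF iteration, cf. (9)-(11): NLMF plus a uniform l1 zero attractor.

    rho_za is a small illustrative constant standing in for the zero-attracting
    parameter derived from mu and lambda_ZA in the paper.
    """
    e = y_obs - h_est @ x_vec
    x_norm2 = x_vec @ x_vec
    nlmf_term = mu * e ** 3 * x_vec / (x_norm2 * (x_norm2 + e ** 2))  # same as (4)
    attractor = rho_za * np.sign(h_est)       # identical pull on every nonzero tap
    return h_est + nlmf_term - attractor
```

Because np.sign applies the same pull to every nonzero tap, large and small coefficients are shrunk by the same amount, which is exactly the uniformity that the re-weighted variant below is designed to avoid.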


If we introduce a re-weighted ℓ1-norm sparse constraint function into the cost function in (2) and adopt different re-weighted factors, e.g., ε_RZA = 1, 5, 10 and 20, then we are able to impose different sparse constraint strengths on different channel coefficients. In other words, RZA can penalize smaller channel coefficients with a stronger constraint strength, and vice versa. Based on this idea, an improved ASCE method using the RZA-NLMF algorithm is proposed in the following.

C. ASCE using RZA-NLMF algorithm

The ZA-NLMF algorithm cannot distinguish between zero taps and

non-zero taps, since all the taps are forced toward zero uniformly, as shown in Fig. 5. Unfortunately, the ZA-NLMF based approach therefore degrades the estimation performance to some extent. Motivated by the re-weighted ℓ1-minimization sparse recovery algorithm [6] in CS [12,13], we propose an improved ASCE method using the RZA-NLMF algorithm. The cost function of RZA-NLMF is given by

G_RZA(h̃(n)) = (1/4) e^4(n) + λ_RZA ∑_{i=0}^{N−1} log(1 + ε_RZA |h̃_i(n)|),   (12)

where λ_RZA > 0 is a regularization parameter which trades off the estimation error and the channel sparsity. The corresponding update equation is

h̃(n+1) = h̃(n) + μ_N e(n) x(n)/‖x(n)‖^2 − ρ_RZA sgn(h̃(n)) / (1 + ε_RZA |h̃(n)|),   (13)

where ρ_RZA is a parameter which depends on the step-size μ, the regularization parameter λ_RZA and the re-weighted factor (threshold) ε_RZA, respectively. In

the second term of (13), if the magnitude of h̃_i(n), i = 0, 1, ..., N−1, is smaller than 1/ε_RZA, then this small coefficient will be replaced by zero with high probability.
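As a companion to (12)-(13), the following sketch (again with illustrative parameter values) shows how the re-weighted attractor scales its pull by 1/(1 + ε_RZA|h̃_i(n)|), so taps below roughly 1/ε_RZA are driven toward zero while larger taps are left nearly untouched.

```python
import numpy as np

def rza_nlmf_update(h_est, x_vec, y_obs, mu=1.0, rho_rza=1e-6, eps_rza=20.0):
    """One RZA-NLMF iteration, cf. (12)-(13); parameter values are illustrative.

    Taps with |h| well below 1/eps_rza feel a nearly full-strength pull toward
    zero, while taps with |h| well above 1/eps_rza are almost untouched.
    """
    e = y_obs - h_est @ x_vec
    x_norm2 = x_vec @ x_vec
    mu_n = mu * e ** 2 / (x_norm2 + e ** 2)            # variable step-size of (5)
    nlmf_term = mu_n * e * x_vec / x_norm2
    attractor = rho_rza * np.sign(h_est) / (1.0 + eps_rza * np.abs(h_est))
    return h_est + nlmf_term - attractor
```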

D. Equivalence between CS-based sparse channel estimation and ASCE

Using the LASSO algorithm, we explained the connection between CS-based sparse channel estimation and ASCE. Here, we further interpret the equivalence between them, which is useful for enriching sparse signal processing theory. Let us take LASSO and ZA-NLMF as an example. If each row of the training matrix X in the system model (6) is denoted by x_t, then by matrix vectorization X can be written as X = [x_1, ..., x_t, ..., x_T]^T, and the t-th received signal sample is obtained as y_t = x_t^T h + z_t. The corresponding quantities of the two formulations are listed in Tab. I.

TAB. I. EQUIVALENCE BETWEEN LASSO AND ZA-NLMF.

                       LASSO                              ZA-NLMF
Training signal        x_t                                x(n)
Received signal        y_t                                y(n)
Channel vector         h = [h_0, h_1, ..., h_{N-1}]^T     h̃(n) = [h̃_0(n), h̃_1(n), ..., h̃_{N-1}(n)]^T
Sparse constraint      ‖h‖_1                              sign(h̃(n))
Iteration index        t = mod(n − 1, T) + 1              n
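A minimal sketch of the correspondence in Tab. I, under the assumption of a white Gaussian training sequence and a toy 2-sparse channel (all names and sizes here are illustrative): each row of the Toeplitz matrix X in (6) is exactly the regressor x(n) that the adaptive algorithms consume one sample at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 100                            # channel length and training length (illustrative)
s = rng.standard_normal(T + N - 1)        # training sequence x(0), x(1), ...
h = np.zeros(N); h[[2, 7]] = [0.9, -0.5]  # toy 2-sparse channel

# Batch (LASSO) view: row t of the Toeplitz matrix X in (6) is the regressor at time n
X = np.array([s[n::-1][:N] for n in range(N - 1, N - 1 + T)])
y = X @ h + 0.01 * rng.standard_normal(T)  # y_t = x_t^T h + z_t

# Adaptive view: the same samples are consumed one row per iteration, as in (1)
x_0, y_0 = X[0], y[0]                      # fed to the (ZA-/RZA-)NLMF recursions
```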

IV. COMPUTER SIMULATIONS

In this section, the proposed ASCE method using the RZA-NLMF algorithm is evaluated. To obtain average performance, 1000 independent Monte-Carlo runs are adopted. The length of the channel vector h is set to N = 16 and its number of dominant taps is set to K = 1 and 4, respectively. Each dominant channel tap follows a random Gaussian distribution N(0, σ_h^2) and the tap positions are randomly allocated within h, which is subject to E{‖h‖^2} = 1. The received signal-to-noise ratio (SNR) is defined as 10 log10(E_s/σ_n^2), where E_s = 1 is the unit transmission power. Here, we set the SNR to 5 dB and 10 dB in the computer simulations. All of the step-sizes and regularization parameters are listed in Tab. II. The estimation performance is evaluated by the average mean square error (MSE), which is defined as

Average MSE{h̃(n)} = E{‖h − h̃(n)‖^2},   (14)

where E{∙} denotes the expectation operator, and h and h̃(n) are the actual channel vector and its adaptive estimator at the n-th iteration, respectively.
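The evaluation protocol above can be sketched as follows; the reduced number of runs, the white training signal and the plain NLMF update inside the loop are simplifying assumptions for illustration (the paper averages over 1000 runs and compares NLMF, ZA-NLMF and RZA-NLMF).

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, runs, iters, mu = 16, 4, 100, 500, 1.0   # runs reduced for illustration (paper: 1000)
snr_db = 5.0
sigma = 10 ** (-snr_db / 20)                   # SNR = 10 log10(E_s / sigma_n^2) with E_s = 1

def random_sparse_channel():
    """K Gaussian dominant taps at random positions, scaled so that ||h||^2 = 1."""
    h = np.zeros(N)
    h[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    return h / np.linalg.norm(h)

mse = np.zeros(iters)
for _ in range(runs):
    h = random_sparse_channel()
    h_est = np.zeros(N)
    for n in range(iters):
        x = rng.standard_normal(N)             # white training regressor x(n)
        y = h @ x + sigma * rng.standard_normal()
        e = y - h_est @ x
        xn2 = x @ x
        h_est = h_est + mu * e ** 3 * x / (xn2 * (xn2 + e ** 2))   # plain NLMF step (4)
        mse[n] += np.sum((h - h_est) ** 2)
mse /= runs                                    # average MSE of (14) at each iteration n
```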

TAB. II. SIMULATION PARAMETERS.

Parameters                                       Values
Channel length                                   N = 16
No. of nonzero coefficients                      K = 1 and 4
Step-size μ                                      0.5, 1.0 and 1.5
Regularization parameters λ_ZA, λ_RZA            1e−6 (SNR = 5 dB) and 5e−8 (SNR = 10 dB)
Re-weighted factor ε_RZA                         5, 10 and 20

Two experiments are considered in this section. In the first experiment, the ASCE methods are evaluated at SNR = 5 dB. The regularization parameters λ_ZA and λ_RZA are set as λ_ZA = λ_RZA = 1e−6. Figs. 6-8 show that the proposed method achieves better estimation performance than ZA-NLMF for different step-sizes, i.e., μ = 0.5, 1.0 and 1.5. When the number of nonzero coefficients is K = 1, a bigger re-weighted factor obtains a better estimate, and vice versa. Please note that both the ZA-NLMF and RZA-NLMF algorithms perform ASCE,

Fig. 6. Average MSE performance comparisons at SNR = 5 dB and step-size μ = 0.5.


which achieves better estimation performance for sparser channels. As shown in the three figures, when K = 1, the gap between the curves of NLMF and those of either ZA-NLMF or RZA-NLMF is larger than in the case of K = 4. It is worth mentioning that a bigger re-weighted factor obtains better estimation performance, e.g., RZA-NLMF using ε_RZA = 20 achieves better estimation performance than ε_RZA = 10. This is mainly because the bigger re-weighted factor imposes a variable sparse constraint on different magnitudes of channel coefficients.

To verify the flexibility of the proposed RZA-NLMF method in different SNR regimes, the ASCE methods are also evaluated at SNR = 10 dB. Here, the regularization parameters λ_ZA and λ_RZA are set to λ_ZA = λ_RZA = 5e−8 to improve the estimation performance. According to Fig. 9, RZA-NLMF works well and achieves better estimation performance than both ZA-NLMF and NLMF.

V. CONCLUSION

In this paper, the disadvantage of the conventional ASCE method using the ZA-NLMF algorithm was discussed from a geometrical perspective. We found that the ℓ1-norm sparse constraint penalizes different channel coefficients uniformly. Inspired by the re-weighted ℓ1-norm algorithm in CS, we proposed an improved ASCE method using RZA-NLMF, which penalizes smaller channel coefficients with a stronger sparse constraint and vice versa. Computer simulations confirmed that the proposed ASCE using the RZA-NLMF algorithm achieves better performance than either the ZA-NLMF or the NLMF approach.

ACKNOWLEDGMENT This work was supported by the Japan Society for the

Promotion of Science (JSPS) postdoctoral fellowship and the National Natural Science Foundation of China under Grant 61261048.

REFERENCES

[1] F. Adachi, H. Tomeba, and K. Takeda, “Introduction of frequency-domain signal processing to broadband single-carrier transmissions in a wireless channel,” IEICE Transactions on Communications, vol. E92-B, no. 9, pp. 2789–2808, Sept. 2009.

[2] E. Walach and B. Widrow, “The least mean fourth (LMF) adaptive algorithm and its family,” IEEE Transactions on Information Theory, vol. 30, no. 2, pp. 275–283, Mar. 1984.

[3] E. Eweda, “Global stabilization of the least mean fourth algorithm,” IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1473–1477, Mar. 2012.

[4] G. Gui and F. Adachi, “Sparse LMF algorithm for adaptive channel estimation in low SNR region,” International Journal of Communication Systems, DOI:10.1002/dac.2531, pp. 1-11, Apr. 2013.

[5] G. Gui and F. Adachi, “Sparse least mean fourth filter with zero-attracting L1-norm constraint,” submitted to International Conference on Information, Communications and Signal Processing (ICICS), Tainan, Taiwan, Dec. 10-13, 2013.

Fig. 7. Average MSE performance comparisons at SNR = 5 dB and step-size μ = 1.0.

Fig. 8. Average MSE performance comparisons at SNR = 5 dB and step-size μ = 1.5.

Fig. 9. Average MSE performance comparisons at SNR = 10 dB and step-size μ = 1.5.



[6] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted L1 minimization,” Journal of Fourier Analysis and Applications, vol. 14, no. 5–6, pp. 877–905, Oct. 2008.

[7] E. Eweda and N. J. Bershad, “Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6235–6244, Dec. 2012.

[8] B. Widrow and S. D. Stearns, Adaptive Signal Processing, New Jersey: Prentice Hall, 1985.

[9] E. J. Candes, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique, vol. 346, pp. 589–592, May 2008.

[10] R. Tibshirani, “Regression shrinkage and selection via the Lasso,” Journal of the Royal Statistical Society (B), vol. 58, no. 1, pp. 267–288, Jan. 1996.

[11] D. L. Donoho and Y. Tsaig, “Fast solution of L1-norm minimization problems when the solution may be sparse,” IEEE Transactions on Information Theory, vol. 54, no. 11, pp. 4789–4812, Nov. 2008.

[12] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.

[13] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.