
An Improved Compressed Sensing-Based Channel

Estimation Algorithm with Near-optimal Pilot Placement

Journal: IEEE Transactions on Communications

Manuscript ID: TCOM-TPS-14-0756

Manuscript Type: Transactions Paper Submissions

Date Submitted by the Author: 11-Sep-2014

Complete List of Authors: Li, Cheng; Memorial University, Faculty of Engineering
Venkatesan, R.; Memorial University, Faculty of Engineering and Applied Sciences
Dobre, Octavia; Memorial University of Newfoundland, Faculty of Engineering and Applied Science
Zhang, Yi; Memorial University of Newfoundland, Faculty of Engineering and Applied Science

Keyword: Communication systems, Estimation, Sparse matrices, Signal processing


An Improved Compressed Sensing-Based Channel Estimation Algorithm with Near-optimal Pilot Placement

Yi Zhang, Ramachandran Venkatesan, Octavia A. Dobre, and Cheng Li
Faculty of Engineering and Applied Science
Memorial University, NL, Canada A1B 3X5
Email: yz7384, venky, odobre, [email protected]

Abstract

This paper presents an improved recovery algorithm, based on sparsity adaptive matching pursuit (SAMP) and combined with near-optimal pilot placement, for compressed sensing (CS)-based sparse channel estimation in orthogonal frequency division multiplexing (OFDM) communication systems. Compared with other state-of-the-art recovery algorithms, the proposed algorithm retains the SAMP feature of not requiring a priori knowledge of the sparsity and, moreover, adjusts its step size adaptively to approach the true sparsity. Furthermore, different pilot arrangements result in different measurement matrices in CS and thus affect the estimation accuracy. It is known that, when the signal is sparse on the unitary discrete Fourier transform (DFT) basis, the set of pilot locations that minimizes the mutual coherence of the measurement matrix is a cyclic difference set (CDS). Based on this, we propose an efficient near-optimal pilot placement scheme for the cases in which a CDS does not exist. Simulation results show that the proposed channel estimation algorithm, with the new pilot placement scheme, offers a better trade-off between performance, in terms of mean squared error (MSE) and bit error rate (BER), and complexity, when compared to other estimation algorithms.

This work is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Research and Development Corporation of Newfoundland and Labrador (RDC).


Index Terms

Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic difference set.

I. INTRODUCTION

Orthogonal frequency division multiplexing (OFDM) has been widely adopted in various wireless communication standards, such as worldwide interoperability for microwave access (WiMAX), long term evolution (LTE) [1], and high definition television (HDTV) broadcasting standards [2], due to its high data rate, efficient spectral utilization, and ability to cope with multipath fading. Its application in underwater acoustic (UWA) communications has also been explored in recent years [3], [4]. In coherent digital wireless systems, obtaining accurate estimates of the channel state information (CSI) at the receiver is critical [5]. Data-aided channel estimation in OFDM systems can be performed either by inserting pilot tones into certain subcarriers of each OFDM symbol, or by using all subcarriers of selected OFDM symbols as pilots within a specific period [6]. Conventional CSI estimation methods, such as least square (LS) and minimum mean-square error (MMSE) [7], cannot exploit the sparsity of wireless channels and often lead to excessive use of spectral and energy resources. Recently, studies have suggested that many multipath channels tend to exhibit a sparse structure, in the sense that the majority of the channel impulse response (CIR) taps are either zero or below the noise floor [8]. A few examples include: a) in the North American HDTV broadcasting standard, there are only a few significant echoes over a typical delay spread [9], [10]; b) UWA channels are characterized by a few dominant echoes over a large time dispersion (on the order of hundreds of milliseconds) [4], [11]; and c) channels of broadband wireless systems in hilly environments also exhibit a sparse CIR [12], [13]. As opposed to traditional methods, channel estimation that exploits the sparsity of the channel reduces the required number of pilots and thus effectively improves the spectral and energy efficiency [4], [8], [10], [14], [15].

More recently, advances in the new field of compressed sensing (CS) [16]–[18] have gained a


fast-growing interest in signal processing and applied mathematics [19], [20]. It has been shown in the literature that CS can be applied to sparse channel estimation [4], [8], [15], [21]–[24]. Unlike traditional channel estimation methods, CS allows accurate reconstruction of a signal that is sparse on a certain basis from a small number of random linear projections/measurements [18]. To ensure an accurate, or even exact, reconstruction of the target signal, a proper reconstruction algorithm and a properly designed measurement matrix are essential.

Existing algorithms for recovering a target sparse signal are generally grouped into two categories: linear programming (LP) and dynamic programming (DP). The basis pursuit (BP) method in LP achieves a good MSE performance; however, its high computational complexity makes it less attractive for large-scale applications in practice. The orthogonal matching pursuit (OMP) [25] algorithm, on the other hand, is the most popular algorithm in DP [15], [22]. An improved OMP variant, referred to as compressed sampling matching pursuit (CoSaMP), was proposed in [26], with an MSE performance close to that of the BP algorithm and a complexity lower than that of the OMP algorithm. However, all these algorithms require knowledge of the channel sparsity, which is often not available in practical applications. Recently, the sparsity adaptive matching pursuit (SAMP) algorithm was proposed to address this issue [27]. While CoSaMP and OMP require the sparsity level as a priori information to determine the number of iterations, SAMP uses a stage-based iterative approach to estimate the sparsity, where a fixed preset step size is used at each consecutive stage. The results showed that SAMP can outperform the OMP algorithm and its variants; however, its MSE performance and complexity are affected by the choice of the step size. More recently, a stage-wise algorithm which uses different step sizes for different stages was proposed in [28]. However, it is not a truly adaptive algorithm, as the change of step sizes depends on a specific relation between the number of measurements and the sparsity level.

In this paper, we propose a novel CS-based reconstruction algorithm based on the SAMP

algorithm, referred to as the adaptive step size SAMP (AS-SAMP), which can adaptively adjust

the step size to achieve fast convergence. We provide simulation results for sparse channel


estimation in UWA-OFDM systems to demonstrate the improved MSE and BER performance of channel estimation using the proposed algorithm. It is also worth noting that this performance is achieved without significantly increasing the complexity when compared with the other recovery algorithms mentioned above. Furthermore, because different pilot placement choices result in different CS measurement matrices, the pilot placement directly affects the performance of channel estimation algorithms. Equally spaced pilots are in general optimal for conventional channel estimation methods, but this is no longer true for CS-based methods [29]. In existing studies related to CS, randomly and deterministically placed pilot tones are mostly reported [29]–[35]. Although an exhaustive search over all possible combinations of the pilot indices guarantees the optimal pilot pattern, its computational complexity increases exponentially as the search space expands. Moreover, for a partial DFT measurement matrix, it is known that if the pilot index set is a cyclic difference set (CDS), the mutual coherence of the measurement matrix is minimized [32], [33], [36]. However, a CDS is not guaranteed to exist for every pilot size. In this paper, we investigate the problem of pilot placement based on the CDS. When a CDS does not exist, we propose a novel pilot pattern selection scheme which relies on the concatenated CDS with an iterative tail search (C-CDS with TS). Because the proposed design is deterministic, it is more efficient than other search-based methods. Simulation results demonstrate that an improvement in MSE and BER performance can be achieved using the proposed pilot placement scheme, when compared to randomly scattered pilots. The main contributions of this paper are as follows:

• We conduct a comparative analysis of the performance of existing reconstruction methods in terms of estimation accuracy and computational complexity. Furthermore, we propose a new reconstruction algorithm for CS applications where a priori knowledge of the sparsity is not required.

• We propose an efficient scheme for near-optimal pilot placement that meets the requirement on the measurement matrix for a satisfactory reconstruction.

The remainder of this paper is organized as follows. In Section II, a review of CS fundamentals


and the system model are presented. In Section III, an improved reconstruction algorithm for sparse channel estimation is proposed and a comparison with the existing reconstruction algorithms is performed. Section IV introduces a novel pilot placement scheme based on the concatenated CDS with an iterative tail search. Section V presents the simulation results and performance evaluation. Section VI concludes the paper.

II. CS FUNDAMENTALS AND SYSTEM MODEL

The following notation will be used throughout the paper. A bold symbol represents a set, a vector, or a matrix, and a capital letter stands for a frequency-domain representation. X^T denotes the transpose of X, and X^† denotes the Moore-Penrose pseudo-inverse of X, defined as (X^H X)^{-1} X^H, where X^H is the Hermitian (conjugate transpose) of X; ‖X‖ and ‖X‖_1 denote the ℓ2 norm and the ℓ1 norm of X, respectively.

A. CS Fundamentals

We consider a contaminated measurement vector obtained through

y = Φx + w,                                                         (1)

where x ∈ R^N is a real-valued, one-dimensional, discrete-time signal vector, Φ is the sensing matrix, and w ∈ R^N is a stochastic error term with bounded energy ‖w‖ < ε [18]. Assuming that x can be expanded in an orthonormal basis Ψ as

x = Ψα,                                                             (2)

where α is the N × 1 coefficient vector, the signal x is K-sparse if and only if K coefficients (K ≪ N) in α are non-zero while the remaining coefficients are zero or negligibly small. Substituting (2) in (1), one obtains

y = ΦΨα + w = Aα + w,                                               (3)

where A is referred to as the measurement matrix. Essentially, CS states that x can be recovered, with high probability, by solving the under-determined problem in (3). The reliability of the recovery depends on two constraints: 1) α is sparse; 2) A satisfies the restricted isometry property (RIP) [16]–[18], which means that for an arbitrary level δ ∈ (0, 1) and any index set


I ⊂ {0, 1, ..., N − 1} such that |I| ≤ K, where | · | denotes the cardinality of the set, and for all α ∈ R^{|I|}, the following relation holds:

(1 − δ) ‖α‖^2 ≤ ‖A_I α‖^2 ≤ (1 + δ) ‖α‖^2,                          (4)

where A_I is the matrix containing the columns of A whose indices are elements of the set I. An estimate of α in (3) can be obtained by solving a convex optimization problem, which is formulated as [17]

α̂ = arg min ‖α‖_1, subject to ‖y − Aα‖ ≤ ε,                         (5)

for a given ε > 0. If A satisfies the RIP and α is sufficiently sparse, the norm of the reconstruction error is bounded by ‖α̂ − α‖ ≤ Cε, where C depends on the RIP-related parameter δ of A rather than on α [16], [17]. In particular, if the measurement matrix is composed of M random rows of an N × N DFT matrix, and if M > C_δ K log N, a K-sparse signal can be reconstructed with probability of at least 1 − O(N^{−δ}), where δ is the constant in the RIP and C_δ is approximately linear in δ [16].
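For intuition, the noiseless limit of (5) (min ‖α‖_1 subject to y = Aα) can be solved as a linear program. The sketch below is a toy illustration with a real Gaussian sensing matrix; it is not part of the paper's setup, and the helper name basis_pursuit and all parameter values are ours.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||alpha||_1 s.t. A alpha = y, cast as an LP over z = [alpha; t]
    with the element-wise constraint |alpha_i| <= t_i (real-valued A, y)."""
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])        # minimize sum(t)
    A_eq = np.hstack([A, np.zeros((M, N))])               # A alpha = y
    I = np.eye(N)
    A_ub = np.vstack([np.hstack([I, -I]),                 # alpha - t <= 0
                      np.hstack([-I, -I])])               # -alpha - t <= 0
    b_ub = np.zeros(2 * N)
    bounds = [(None, None)] * N + [(0, None)] * N         # alpha free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:N]

# Toy example: recover a 5-sparse vector from 40 Gaussian measurements
rng = np.random.default_rng(0)
N, M, K = 128, 40, 5
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
print(np.linalg.norm(basis_pursuit(A, A @ alpha) - alpha))  # typically close to 0
```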

B. System Model

We consider an N-subcarrier OFDM system in which P subcarriers are used as pilots. The symbols transmitted on the kth subcarrier, X(k), 0 ≤ k ≤ N − 1, are assumed to be independent and identically distributed random variables drawn from a phase-shift keying (PSK) or quadrature amplitude modulation (QAM) signal constellation. Assume a time-invariant multipath channel with the impulse response

h(n) = Σ_{p=0}^{N_p − 1} η_p δ(n − τ_p),                             (6)

where N_p is the number of paths, and η_p and τ_p are the amplitude gain and the delay associated with the pth path, respectively. The vector of the received signal after the discrete Fourier transform (DFT) is expressed as

Y = XH + W = XDh + W,                                               (7)

where X is an N × N diagonal matrix with the elements X(k), 0 ≤ k ≤ N − 1, on the main diagonal, Y = [Y(0), Y(1), ..., Y(N − 1)]^T, H = [H(0), H(1), ..., H(N − 1)]^T, and W =


[W(0), W(1), ..., W(N − 1)]^T are the frequency response vector of the channel and the additive white Gaussian noise (AWGN), respectively, and h = [h(0), h(1), ..., h(L − 1)]^T. The (m, n) element of D is given by [D]_{m,n} = (1/√N) e^{−j2πmn/N}, where 0 ≤ m ≤ N − 1 and 0 ≤ n ≤ L − 1. After extracting the pilot subcarriers, we can write the following input-output relationship

Y_p = X_p D_p h + W_p = A h + W_p,                                  (8)

where Y_p = SY, X_p = SXS^T, D_p = SD, W_p = SW, and S is a P × N selection matrix for the pilot subcarriers. In addition, A = X_p D_p is a P × L matrix, referred to as the measurement matrix. The goal of CS-based channel estimation is to estimate h from the received pilot vector Y_p, given the measurement matrix A.
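A minimal numpy sketch of the model in (8) follows, assuming equal-powered (all-ones) pilot symbols and a randomly scattered pilot set; the variable names are ours and the sizes only mirror the example values used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, L, K = 1024, 256, 250, 15          # subcarriers, pilots, CIR length, sparsity

# K-sparse channel impulse response h (random complex taps at random delays)
h = np.zeros(L, dtype=complex)
taps = rng.choice(L, size=K, replace=False)
h[taps] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# Pilot index set Omega (random here; Section IV replaces this with C-CDS with TS)
omega = np.sort(rng.choice(N, size=P, replace=False))

# Partial DFT matrix Dp: the rows of the N x L DFT submatrix selected by Omega
n = np.arange(L)
Dp = np.exp(-2j * np.pi * np.outer(omega, n) / N) / np.sqrt(N)

# Equal-powered pilot symbols (all ones) give the measurement matrix A = Xp Dp = Dp
A = Dp.copy()

snr_db = 10.0
Yp_clean = A @ h
noise_var = np.mean(np.abs(Yp_clean) ** 2) / 10 ** (snr_db / 10)
Wp = np.sqrt(noise_var / 2) * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
Yp = Yp_clean + Wp                        # received pilots used by the recovery algorithms
```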

III. PROPOSED AS-SAMP WITH APPLICATION TO SPARSE CHANNEL ESTIMATION

A. Comparison of Reconstruction Algorithms in the Literature

The reconstruction algorithms considered in this paper are orthogonal matching pursuit (OMP), compressed sampling matching pursuit (CoSaMP), and sparsity adaptive matching pursuit (SAMP). Fig. 1 depicts the corresponding flow charts, where C_t, F_t, r_t, and K denote the candidate support set, the final support set, the residual vector in the tth iteration, and the sparsity of the target signal, respectively. As seen from Fig. 1, the OMP algorithm expands F_t by adding coordinates successively, using a single maximum correlation test to add one coordinate to F_t per iteration. On the other hand, the CoSaMP algorithm refines a fixed-size F_t by selecting coordinates from a set of candidates C_t. It uses a preliminary correlation test and a final correlation test, abbreviated as preliminary test and final test, to add one or more coordinates to F_t. The final test removes the wrong coordinates added in the preliminary test, which is referred to as backtracking, and therefore improves the accuracy of the estimation [26]. However, most natural signals are compressible rather than strictly sparse, and the sparsity K of such signals cannot be well defined. It has been shown that the reconstruction accuracy can be significantly degraded when K is either underestimated or overestimated [27]. Unlike the other algorithms, SAMP does not require a priori knowledge of K. It adopts a stage-wise approach to identify F_t through the backtracking strategy.
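For reference, a compact OMP implementation of the single-coordinate maximum correlation test described above might look as follows (a sketch with our own naming, not the authors' code):

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: recover a K-sparse h from y = A h + w.
    A: (P, L) complex measurement matrix, y: (P,) measurements, K: sparsity."""
    P, L = A.shape
    r = y.copy()                              # residual
    support = []                              # final support set F_t
    h_hat = np.zeros(L, dtype=complex)
    for _ in range(K):                        # one coordinate added per iteration
        corr = np.abs(A.conj().T @ r)         # maximum correlation test
        corr[support] = 0.0                   # do not reselect chosen columns
        support.append(int(np.argmax(corr)))
        A_F = A[:, support]
        # least-squares estimate on the current support (Moore-Penrose pseudo-inverse)
        h_F, *_ = np.linalg.lstsq(A_F, y, rcond=None)
        r = y - A_F @ h_F                     # update residual
    h_hat[support] = h_F
    return h_hat
```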


[Fig. 1. Flow charts of the OMP, CoSaMP, and SAMP algorithms.]

Within each stage, the size of F_t stays the same across iterations; when the algorithm moves to the next stage, the size of F_t is increased by a fixed step size s to search for more coordinates of the recovered signal corresponding to the least residual. This process continues until the residual of the recovered signal falls below a predetermined threshold. Although SAMP guarantees exact recovery after a finite number of iterations (see the proof in [27]), it leaves open the question of how to choose the step size s to achieve a trade-off between estimation accuracy and complexity. This motivates us to address the problem of adaptively adjusting the step size between consecutive stages. Recently, a variable step size algorithm was proposed in [28]. However, the increment of the step size is based on a particular relationship between the number of measurements and the sparsity, which is not always valid in applications. Therefore, we propose a novel AS-SAMP algorithm which can adaptively adjust the step size; this is presented subsequently.


B. Proposed AS-SAMP Algorithm

Since a smaller step size s in the SAMP algorithm leads to a better estimation accuracy at the cost of increased complexity, while a larger s degrades the estimation accuracy but reduces the complexity, an adaptively adjusted s may lead to a better trade-off between the accuracy of the estimation and the complexity of the algorithm. Specifically, an adaptively adjusted s means that the change of s depends on how far the current reconstruction state, e.g., the energy of the current reconstructed signal or the estimated sparsity of the current reconstructed signal, is from the state of the true signal.

Algorithm 1 AS-SAMP
Input: Received signal at pilot subcarriers Y_p, measurement matrix A, tolerance ǫ, threshold Γ, initial step size s_I
1: Initialize h = [0, 0, ..., 0]^T, h_old = [0, 0, ..., 0]^T, r_temp = [0, 0, ..., 0]^T, index set D_0 = ∅, candidate support set C_0 = ∅, residual r_0 = Y_p, size of final support set L = s = s_I, final support set F_0 = ∅, iteration index t = 1
2: while (‖r_{t−1}‖ > ǫ) do
3:   Calculate the signal SP = |A^H r_{t−1}|
4:   Select the index set D_t of columns of A corresponding to the L largest elements in SP   ⊲ Preliminary test
5:   Merge the chosen indices with the final support set from the previous iteration into the candidate support set C_t = D_t ∪ F_{t−1}
6:   Refine the candidate set to the final set F_t by selecting the indices corresponding to the L largest elements of |A^†_{C_t} Y_p|   ⊲ Final test
7:   Solve the least-squares problem h(F_t) = A^†_{F_t} Y_p
8:   Calculate the current residual r_temp = Y_p − A_{F_t} A^†_{F_t} Y_p
9:   if (‖r_temp‖ < ǫ) then
10:    r_t = r_{t−1}
11:    Break
12:  else if (‖r_temp‖ ≥ ‖r_{t−1}‖) then
13:    if (‖h(F_t)‖ − ‖h_old‖ < Γ) then
14:      s = ⌈s/2⌉, L = L + s, h_old = h(F_t), r_t = r_{t−1}, t = t + 1   ⊲ Fine tuning
15:    else
16:      L = L + s, h_old = h(F_t), r_t = r_{t−1}, t = t + 1   ⊲ Fast approaching
17:    end if
18:  else
19:    F_{t−1} = F_t, r_t = r_temp, t = t + 1
20:  end if
21: end while
22: return h
Output: Estimate of the baseband channel impulse response h.


Because the sparse elements with large values are reconstructed in the initial stages of the algorithm, the energy difference of the reconstructed signal between consecutive stages decreases at a declining rate as the number of stages increases. In other words, the energy of the reconstructed signal tends to become stable when the estimated sparsity is close to the true sparsity K. Following this property, we propose the AS-SAMP algorithm: to expedite convergence, the algorithm begins with a larger step size (the initial step size is denoted as s_I), which is adaptively decreased to provide fine tuning in later stages as the change rate of the reconstructed signal's energy decreases. Consequently, an additional threshold Γ is used to specify the beginning of the fine tuning. The pseudocode for the proposed algorithm is presented as Algorithm 1. The algorithm is also stage-wise, with a variable size of F_t in different stages. During a stage, it adopts two correlation tests iteratively, i.e., the preliminary and final tests, to search for a certain number of coordinates corresponding to the largest correlation values between the signal residual and the columns of the measurement matrix. The algorithm then moves to the next stage, until the recovered signal with the least residual is found. As opposed to SAMP, the proposed algorithm incorporates two threshold values into the halting criterion: the tolerance ǫ and Γ. AS-SAMP halts when the residual's norm is smaller than ǫ, where ǫ is set to the noise energy. Meanwhile, s is decreased when the energy difference of the reconstructed signal falls below Γ, whose value is chosen based on empirical observations. Starting with a sufficiently large initial step size (s_I ≤ K), the algorithm quickly approaches the target signal. However, when the difference in the energy of the reconstructed signals becomes smaller than the preset Γ, the step size is reduced (by a factor of two) to avoid overestimation of the K-sparse target signal; this overestimation can significantly degrade the accuracy of the algorithm [27]. Theoretical guarantees of exact recovery of AS-SAMP, in both the noiseless and noisy cases, are provided with the corresponding proofs in Appendix A.
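A Python rendering of Algorithm 1 is sketched below to make the stage logic concrete; it follows our reading of the pseudocode (the variable names and minor details such as the size guard are ours) and is not the authors' implementation.

```python
import numpy as np

def as_samp(A, yp, eps, gamma, s_init):
    """Sketch of Algorithm 1 (AS-SAMP). A: (P, L) measurement matrix, yp: received
    pilots, eps: residual tolerance, gamma: energy threshold, s_init: initial step."""
    L_cols = A.shape[1]
    s, size_f = s_init, s_init             # step size and current support size
    F = np.array([], dtype=int)            # final support set F_t
    r = yp.copy()                          # residual r_t
    h_old_norm = 0.0
    while np.linalg.norm(r) > eps and size_f <= L_cols:
        # preliminary test: size_f largest correlations between residual and columns
        corr = np.abs(A.conj().T @ r)
        D = np.argsort(corr)[-size_f:]
        C = np.union1d(D, F).astype(int)             # candidate support set C_t
        # final test: keep the size_f largest least-squares coefficients on C_t
        h_C, *_ = np.linalg.lstsq(A[:, C], yp, rcond=None)
        F_new = C[np.argsort(np.abs(h_C))[-size_f:]]
        h_F, *_ = np.linalg.lstsq(A[:, F_new], yp, rcond=None)
        r_new = yp - A[:, F_new] @ h_F
        if np.linalg.norm(r_new) < eps:              # halting condition met
            F = F_new
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            # stage change: halve s once the energy increment drops below gamma
            if np.linalg.norm(h_F) - h_old_norm < gamma:
                s = max(1, int(np.ceil(s / 2)))      # fine tuning
            size_f += s                              # fast approaching
            h_old_norm = np.linalg.norm(h_F)
        else:
            F, r = F_new, r_new                      # keep the improved support
    h_hat = np.zeros(L_cols, dtype=complex)
    if F.size:
        sol, *_ = np.linalg.lstsq(A[:, F], yp, rcond=None)
        h_hat[F] = sol
    return h_hat
```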

C. Computational Complexity

In this section, the computational complexity of the existing algorithms in the literature and

the proposed algorithm is compared in terms of the number of operations. Among the steps of


each algorithm, the maximum correlation test dominates the complexity through the cost of the multiplication A^H r_t; therefore, the number of operations in the other steps is not considered in this comparison. In addition, the complexity of the algorithms also depends on the number of iterations. Since SAMP takes a finite number of iterations over at most ⌈K/s⌉ stages, and during each stage a portion of the coordinates in the true support set is identified and refined via up to K iterations, an upper bound on the number of iterations is ⌈K/s⌉K. On the other hand, due to the fast convergence of the AS-SAMP algorithm, fewer stages are required to provide the same quality of estimates, and since the computational complexity of each step is the same, the AS-SAMP algorithm is less complex than the SAMP algorithm. An upper bound on the number of iterations of AS-SAMP is likewise ⌈K/s⌉K. Note that the upper bounds obtained for SAMP and AS-SAMP are quite loose, as the number of iterations, which varies from one stage to another, is likely to be equal to or smaller than s for most of the stages. Thus, we present an improved upper bound on the number of iterations for the AS-SAMP algorithm in Lemma 2, Appendix A, and we show that this upper bound is smaller than that for the SAMP algorithm in Corollary 1, Appendix A. Moreover, according to [25], the OMP algorithm requires at least K iterations, and therefore roughly KPN operations are needed. For CoSaMP, the number of operations is upper bounded by KPN [26]. Note that a larger number of iterations is needed for OMP, as only one coordinate is identified during each iteration, and it lacks a theoretical guarantee of reconstruction quality, while CoSaMP and SAMP exhibit low reconstruction complexity and offer theoretical guarantees of reconstruction quality [26], [27]. A summary of the computational complexity of the considered algorithms is provided in Table I.

TABLE I
COMPUTATIONAL COMPLEXITY

Method      Number of operations
OMP         KPN
CoSaMP      ≤ KPN
SAMP‡       ≤ [−J log(|h_min|/‖h‖) · (−1/log(C_{K_{s−SAMP}})) + J] PN
AS-SAMP‡    ≤ [−J log(|h_min|/‖h‖) · (−1/log(C_{K_{s−ASSAMP}})) + J] PN

‡ h is the target signal, h_min is the non-zero element with the minimum magnitude in h, and J is the total number of stages. C_{K_{s−SAMP}} and C_{K_{s−ASSAMP}} are RIP-related parameters for SAMP and AS-SAMP, respectively (see Appendix A for a detailed explanation).


IV. PROPOSED PILOT PLACEMENT BASED ON CONCATENATED CDS

A. Problem Statement

According to CS theory, accurate recovery of a sparse signal relies on the RIP of the measurement matrix. However, evaluating the RIP for a particular matrix is a non-deterministic polynomial-time hard (NP-hard) problem [18]. An alternative property which evaluates whether a measurement matrix can preserve the information of the sparse signal in the measurements is the mutual coherence of the measurement matrix [16]–[18]. In (8), the measurement matrix is the product of the transmitted pilots and the DFT submatrix, and is determined by both the symbols on the pilot subcarriers and the set of pilot location indices, the latter being also referred to as the pilot placement/arrangement. In this section, we focus on the pilot placement by assuming that the same symbol is transmitted on all pilot subcarriers. The mutual coherence of a P × L measurement matrix A is defined as the maximum absolute correlation between any two normalized columns, i.e.,

µ(A) = max_{1≤i,j≤L, i≠j} |⟨a_i, a_j⟩| / (‖a_i‖ · ‖a_j‖).                          (9)

Given equal-powered pilots, and substituting A with X_p D_p, (9) becomes

µ(A) = max_{1≤i,j≤L, i≠j} |X(k_c)|^2 |⟨d_{p_i}, d_{p_j}⟩| / (‖d_{p_i}‖ · ‖d_{p_j}‖),    (10)

where c = 1, 2, ..., P and d_{p_i} denotes the ith column of D_p, whose mth element is given by (1/√N) e^{−j2πik_m/N}, m = 1, 2, ..., P. We aim to find the set of pilot location indices Ω = {k_1, k_2, ..., k_P} which minimizes µ(A). Several methods have been suggested to search for suitable solutions iteratively [29], [32], [33]; however, the complexity of these methods is potentially high because the search space grows rapidly (even exponentially) as the numbers of total subcarriers and pilot subcarriers increase. In the following section, we propose a pilot placement scheme which aims to provide a near-optimal solution without suffering from this fast-growing complexity.
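Equation (10) can be evaluated directly for any candidate pilot set; a small sketch (with our own helper names) is given below, assuming identical, equal-powered pilot symbols so that only the partial DFT matrix matters.

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) as in (9): largest absolute correlation between normalized columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # normalize columns
    G = np.abs(An.conj().T @ An)                        # |<a_i, a_j>| for all pairs
    np.fill_diagonal(G, 0.0)                            # ignore i = j
    return G.max()

def coherence_for_pilots(omega, N, L):
    """mu of the partial DFT measurement matrix for pilot index set omega,
    assuming identical, equal-powered pilot symbols."""
    n = np.arange(L)
    Dp = np.exp(-2j * np.pi * np.outer(np.asarray(omega), n) / N) / np.sqrt(N)
    return mutual_coherence(Dp)
```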


B. Proposed Pilot Placement based on the Concatenated CDS

Suppose that the measurement matrix A is composed of P rows of the N × L partial DFT matrix D in (7), that the index set of the selected rows is Ω, and that all pilot symbols are equal-powered. According to [32], [33], [36], pilot arrangements based on a CDS are optimal in minimizing the mutual coherence of A (see Theorem 3 in Appendix C). However, a CDS exists only for certain numbers of subcarriers and pilots. For the cases where no CDS exists, a pilot placement scheme based on the concatenated CDS is proposed. To begin with, it is important to note that for an existing (P, N, λ) CDS of order P,¹ every non-identity element of the set G of order N has exactly the same number of repetitions, λ, where G is the set of cyclic differences of any two elements of the CDS [37]. In other words, if we denote the numbers of repetitions of the different elements of G as λ_G = {λ_g | g = 1, 2, ..., N − 1}, then λ_1 = λ_2 = ... = λ_{N−1} = λ, which also means that the variance of λ_G is zero. Moreover, it is observed that a pilot pattern with a smaller variance of λ_G is likely to yield a smaller mutual coherence of the resulting measurement matrix and thus more accurate estimates. Consider an OFDM system with N = 1024, in which P = 256 identical pilot symbols are randomly scattered, and the number of taps of the sparse CIR is L = 250. To quantify the channel estimation error, we adopt the MSE, defined as

MSE = E[Σ_{m=1}^{N} |H(m) − Ĥ(m)|^2].                              (11)

To show that, as the variance of λ_G increases, the mutual coherence of A and the MSE of the estimates are likely to increase as well, Spearman's rank correlation is adopted to measure the strength of a monotonic relationship (i.e., values of elements in one vector either increase or decrease with every increase in an associated vector) between paired vectors [38]. Table II shows Spearman's rank correlation between any pair of the following four vectors: the variance of λ_G, the mutual coherence µ(A), and the average MSE of the OMP and AS-SAMP estimates; the results were obtained over 10^4 pilot patterns, using 10^3 OFDM symbols and a signal-to-noise ratio (SNR) of 10 dB.

¹A (P, N, λ) cyclic difference set is a subset D = {D(1), D(2), ..., D(P)} of the integers modulo N such that every element of {1, 2, ..., N − 1} can be represented as a difference (D(i) − D(j)) modulo N, i, j = 1, 2, ..., P, in exactly λ different ways. For example, the (3, 7, 1) CDS is {1, 2, 4} modulo 7.


TABLE II
SPEARMAN'S RANK CORRELATIONS‡

                           var(λ_G)§   µ(A)     Average MSE of OMP   Average MSE of AS-SAMP
var(λ_G)                   1           0.7475   0.7467               0.7527
µ(A)                       0.7475      1        0.7481               0.7534
Average MSE of OMP         0.7467      0.7481   1                    —
Average MSE of AS-SAMP     0.7527      0.7534   —                    1

‡ For two vectors of size V, A = [A(1), A(2), ..., A(V)] and B = [B(1), B(2), ..., B(V)], Spearman's correlation is calculated as Σ_{i=1}^{V}(a_i − ā)(b_i − b̄) / √(Σ_{i=1}^{V}(a_i − ā)^2 · Σ_{i=1}^{V}(b_i − b̄)^2), where a_i and b_i are the positions in ascending order (ranks) of A(i) and B(i), respectively, and ā and b̄ are the means of a_i and b_i, i = 1, 2, ..., V.
§ The variance of λ_G is (1/(N−1)) Σ_{g=1}^{N−1} (λ_g)^2 − ((1/(N−1)) Σ_{g=1}^{N−1} λ_g)^2.

Spearman's rank correlation takes values between 1 and −1, with 1 (−1) indicating that the two vectors are related by a monotonically increasing (decreasing) function, and 0 meaning that there is no tendency for one vector to either increase or decrease when the other increases [38].
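As an illustration of the quantity var(λ_G) used above, the repetition counts of the cyclic differences of a pilot index set can be computed as in the sketch below (illustrative code with our own function name, not from the paper); for a true CDS all counts are equal and the variance is zero.

```python
import numpy as np

def cyclic_difference_counts(omega, N):
    """lambda_G: number of times each nonzero cyclic difference g occurs among
    ordered pairs of distinct pilot indices in omega (modulo N)."""
    omega = np.asarray(omega)
    counts = np.zeros(N - 1, dtype=int)
    for i in omega:
        for j in omega:
            if i != j:
                counts[(i - j) % N - 1] += 1
    return counts

lam = cyclic_difference_counts([1, 2, 4], 7)   # the (3, 7, 1) CDS -> all ones
print(lam, lam.var())                          # var(lambda_G) = 0 for a CDS
```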

It is observed that by concatenating a CDS,² the variance of the number of repetitions tends to be small. Based on these observations, we propose a pilot placement scheme based on the concatenated CDS for pairs (P, N) for which no CDS exists. First, a CDS is chosen for concatenation according to the ratio of the number of pilots to the number of subcarriers, i.e., P/N. Specifically, we select an existing CDS with parameters (u, v, a) for which u/v is the closest to P/N. For instance, to select indices for 256 pilots from 1024 positions, the (133,33,8) CDS is used. After concatenating the selected CDS, we adopt an iterative procedure, similar to that in [32], to find the remaining pilot positions which minimize the mutual coherence of the resulting measurement matrix. We refer to this as the iterative tail search; the pseudocode for the proposed scheme is shown as Algorithm 2. It is worth noting that the size of the search space is greatly reduced after concatenation, and hence the proposed method converges much faster than the iterative methods in [32], [33].

²A concatenated CDS is illustrated by the following example. For the (3, 7, 1) CDS, a concatenated CDS is obtained as {1, 2, 4, (1×7+1), (1×7+2), (1×7+4), (2×7+1), (2×7+2), (2×7+4), ..., (i×7+1), (i×7+2), (i×7+4), ...}, where i ∈ Z+ and i ≥ 1.


Algorithm 2 Pilot Placement Based on Concatenated CDS with an Iterative Tail Search
Input: An existing (u, v, a) CDS C for concatenation, the total number of subcarriers N, the number of pilot subcarriers P, and the partial DFT matrix D whose (m, n) element is (1/√N) e^{−j2πmn/N}, with 0 ≤ m ≤ N − 1, 0 ≤ n ≤ L − 1, where L is the number of taps of the CIR
1: Initialize Ω_c^0 = ∅, Ω_temp = ∅
2: for i from 1 to ⌊N/v⌋ do
3:   Ω_c^i = Ω_c^{i−1} ∪ [C + (i − 1) × v]
4: end for
5: P_r = P − u × ⌊N/v⌋, Ω = Ω_c
6: for j from 1 to P_r do
7:   Ω_temp = Ω
8:   Form all P_r − j + 1 possible subsets of size j by adding an element to Ω_temp: Ω = Ω_temp ∪ {k}, k ∈ {P_r + 1, P_r + 2, ..., N}\Ω_temp
9:   Form the matrix A by selecting the rows of D indexed by Ω, for each j-element set generated in the previous step
10:  For all (P_r − j + 1) matrices A generated in the previous step, calculate the corresponding mutual coherence, and keep the set with the minimum mutual coherence
11: end for
12: return Ω
Output: The pilot index set Ω
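A simplified Python reading of Algorithm 2 is sketched below (our own simplification: the remaining pilots are chosen greedily from all unused subcarriers rather than only from the tail, and the function names are ours):

```python
import numpy as np

def _mu(omega, N, L):
    """Mutual coherence of the partial DFT matrix for pilot index set omega."""
    n = np.arange(L)
    Dp = np.exp(-2j * np.pi * np.outer(np.asarray(omega), n) / N) / np.sqrt(N)
    An = Dp / np.linalg.norm(Dp, axis=0, keepdims=True)
    G = np.abs(An.conj().T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

def c_cds_with_tail_search(cds, v, N, P, L):
    """Concatenate a CDS (elements modulo v) over floor(N/v) periods, then add the
    remaining pilots one by one, keeping the candidate that minimizes mu."""
    omega = sorted({c + i * v for i in range(N // v) for c in cds})
    while len(omega) < P:                          # iterative search for the tail
        chosen = set(omega)
        candidates = [k for k in range(N) if k not in chosen]
        best = min(candidates, key=lambda k: _mu(omega + [k], N, L))
        omega = sorted(omega + [best])
    return omega

# Example with the (3, 7, 1) CDS {1, 2, 4} on a small system:
# pilots = c_cds_with_tail_search([1, 2, 4], 7, N=64, P=16, L=16)
```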

V. SIMULATION RESULTS

A. Simulation Set-up

As UWA channels are inherently sparse, we consider UWA channel estimation for a coded OFDM transmission with N = 1024 subcarriers and a bandwidth of 9.8 kHz, leading to a subcarrier spacing of 9.5 Hz. The CP duration equals 26 ms, which corresponds to a CP length of N_CP = 256. Unless otherwise mentioned, P = 256 pilots are assumed. The data symbols are drawn independently from a 16-QAM constellation and are coded using a (1024, 512) low-density parity-check (LDPC) code. We consider the channel model described in (6) with N_p = 15 multipath components, in which the inter-arrival times are exponentially distributed with a mean of 1 ms, i.e., E[τ_{j+1} − τ_j] = 1 ms, j ∈ {0, 1, ..., N_p − 1}. The amplitudes are Rayleigh distributed, with the average power decreasing exponentially with delay, such that the difference between the beginning and the end of the CP is 20 dB. These parameters are assumed to be constant within an OFDM symbol. The parameters for the considered reconstruction algorithms are given in


Table III. Since there is a trade-off between the initial step size s and the reconstruction speed, three choices of the s value (s ≤ N_p) are used, corresponding to a small, a medium, and a large step size. Also, as the threshold Γ depends on the distribution of the magnitude of the target signal, we set it to 1 based on empirical results. The MSE and BER are used to measure the channel estimation accuracy and the system performance, respectively. The CPU running time is used to provide a rough estimate of the computational complexity of the channel estimation. Simulations are performed in MATLAB R2009a on a 2.67 GHz Intel Core i7 CPU with 4 GB of memory, and 10^4 Monte-Carlo trials are employed for averaging the results. The performance of the proposed reconstruction algorithm and of the pilot placement scheme is shown next.

TABLE III
PARAMETERS OF COMPARED ALGORITHMS

Name       MaxIter‡        Sparsity K      Step size s          Tolerance ǫ     Threshold Γ
OMP        20              15              not required         not required    not required
CoSaMP     20              15              not required         norm(Noise)§    not required
SAMP       not required    not required    1, 6, 8              norm(Noise)     not required
AS-SAMP    not required    not required    initially 1, 6, 8    norm(Noise)     1

‡ Maximum number of iterations.
§ norm(V) = Σ|V|².
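For reproducibility, the sparse UWA channel described above might be generated as in the following sketch (our interpretation of the stated parameters — exponential inter-arrival times, Rayleigh amplitudes, and a 20 dB exponential power decay across the CP — not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_sparse_cir(Np=15, mean_interarrival_ms=1.0, decay_db=20.0,
                        cp_ms=26.0, fs_hz=9.8e3, L=256):
    """Sparse CIR with exponential inter-arrival times, Rayleigh amplitudes, and
    average power decaying exponentially by decay_db over the CP duration."""
    # path delays: cumulative exponential inter-arrivals (ms), clipped to the CP
    delays_ms = np.cumsum(rng.exponential(mean_interarrival_ms, size=Np))
    delays_ms = delays_ms[delays_ms < cp_ms]
    # average power profile: decay_db drop from the start to the end of the CP
    avg_power = 10 ** (-decay_db / 10 * delays_ms / cp_ms)
    # Rayleigh-distributed amplitudes (complex Gaussian taps) with that power
    gains = np.sqrt(avg_power / 2) * (rng.standard_normal(delays_ms.size)
                                      + 1j * rng.standard_normal(delays_ms.size))
    h = np.zeros(L, dtype=complex)
    taps = np.minimum((delays_ms * 1e-3 * fs_hz).astype(int), L - 1)
    np.add.at(h, taps, gains)          # place each path on its sample-spaced tap
    return h
```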

B. Performance of the Proposed AS-SAMP Algorithm

First, we compare the proposed algorithm with two classic algorithms, namely least square (LS) and OMP, using different numbers of randomly distributed pilots. Fig. 2 shows the MSE of these algorithms versus SNR. As the number of pilots increases, the MSE decreases for all algorithms. It is worth noting that OMP has, in general, a better MSE performance than LS for the same number of pilots. Similarly, AS-SAMP achieves a better MSE performance than the OMP algorithm. For example, at SNR = 15 dB and P = 64, the MSEs of the LS, OMP, and AS-SAMP algorithms are 8 × 10^−2, 2 × 10^−2, and 6 × 10^−3, respectively. In other words, for the same level of MSE performance, the proposed algorithm uses fewer pilots than the other two algorithms.


[Fig. 2. MSE performance of the LS, OMP, and AS-SAMP algorithms with various numbers of pilots.]

[Fig. 3. MSE performance of the LS, OMP, CoSaMP, SAMP, and AS-SAMP algorithms.]

Next, with a fixed number of pilots, Figs. 3, 4, and 5 plot the MSE, the BER, and the computational complexity of all the algorithms, respectively. According to the results in Figs. 3 and 4, the CS-based channel estimators give better MSE and BER performance than the conventional LS estimator. Moreover, the channel estimators based on the SAMP and AS-SAMP algorithms outperform those based on the OMP and CoSaMP algorithms, in the sense that the former offer the same performance while using fewer pilots. As shown in Fig. 5, the complexity of the OMP algorithm is higher than that of the other algorithms; this is easily explained, as only one coordinate is added during each iteration. Similarly, the complexity of the AS-SAMP algorithm is higher than that of SAMP, which is explained by the step size being reduced during the fine-tuning stages for the same initial step size in AS-SAMP.

Figs. 6 and 7 depict the MSE performance and the computational complexity of the AS-SAMP and SAMP algorithms with different step sizes, respectively. As can be seen, for a medium or large initial step size (s = s_I = 6 or s = s_I = 8), AS-SAMP outperforms SAMP with a small increase in complexity, while for s = s_I = 1, the same performance is achieved with a slightly larger CPU running time for AS-SAMP. This can be easily explained, as when s = s_I = 1, AS-SAMP becomes equivalent to SAMP except for an additional criterion for changing stages.


[Fig. 4. BER performance of the LS, OMP, CoSaMP, SAMP, and AS-SAMP algorithms.]

[Fig. 5. Running time of the OMP, CoSaMP, SAMP, and AS-SAMP algorithms.]

[Fig. 6. MSE performance of the SAMP and AS-SAMP algorithms with different step sizes.]

[Fig. 7. Running time of the SAMP and AS-SAMP algorithms with different step sizes.]

Note that SAMP and AS-SAMP require s ≤ N_p and s_I ≤ N_p, respectively, to avoid overestimation. In general, the proposed algorithm is more accurate without significantly increasing the complexity of the estimation.

C. Performance of the Proposed Pilot Placement Scheme

Here, we consider three pilot placement schemes: random placement, the procedure in [32], and our proposed scheme. Equal-powered pilots are assumed in all scenarios. With the random scheme, the pilots are selected randomly among all subcarriers, and 10^4 trials are generated for averaging


the results. With our proposed scheme, the pilots are arranged based on the concatenated CDS with an iterative tail search, referred to as C-CDS with TS. Because the selection of the existing CDS depends on the ratio P/N, the (273,17,1), (73,9,1), and (133,33,8) CDSs are chosen for the cases of P = 64, P = 128, and P = 256, respectively.

[Fig. 8. MSE performance of the OMP and AS-SAMP algorithms with random and the proposed pilot placement, for P = 64.]

[Fig. 9. MSE performance of the OMP and AS-SAMP algorithms for different pilot placements, for P = 128, 256. Solid lines are used for OMP and dashed lines for AS-SAMP.]

Fig. 8 shows the MSE performance of the OMP and AS-SAMP algorithms with the random and the proposed pilot placement schemes, for P = 64. For the randomly placed pilots, error bars are used to indicate the standard deviation of the MSE, calculated over 10^4 index sets. It can be seen that the proposed method provides a superior channel estimation performance when compared to the random placements, as a reduced mutual coherence µ is obtained. For instance, at SNR = 16 dB, the average MSEs of the random scheme for OMP and AS-SAMP are 1.6 × 10^−2 and 3 × 10^−3, respectively, while the MSEs of the proposed scheme for OMP and AS-SAMP are 1.2 × 10^−2 and 2 × 10^−3, respectively. It should be noted that, for each SNR, the MSE with the proposed method is smaller than the mean of the MSEs minus the standard deviation obtained with the randomly placed pilots; more specifically, it is approximately equal to the mean minus three times the standard deviation. As such, the proposed method provides a better MSE performance than nearly all random pilot arrangements. Also, AS-SAMP achieves a


[Fig. 10. BER performance of the OMP and AS-SAMP algorithms for different pilot placements, for P = 256. Solid lines are used for OMP and dashed lines for AS-SAMP.]

[Fig. 11. BER performance of the OMP, CoSaMP, SAMP, and AS-SAMP algorithms for random and the proposed pilot placement, for P = 256. Solid lines are used for OMP, dotted lines for CoSaMP, dash-dotted lines for SAMP, and dashed lines for AS-SAMP.]

better MSE performance than OMP for both pilot placement schemes.

Fig. 9 shows the MSE of the OMP and AS-SAMP algorithms for the three previously mentioned pilot placement schemes, for P = 128, 256. Among them, the proposed C-CDS with TS has a slightly better performance. Moreover, since the C-CDS pilot arrangement is deterministic, and the iterative search is only conducted for the tail, the search space is greatly reduced. Therefore, the number of iterations of the proposed method, which is proportional to its computational complexity, is significantly lower than that of the procedure in [32]. An example is provided as follows. When N = 1024 and P = 256, the procedure in [32] requires (2N − P + 1)P/2 = 229,504 iterations. For our proposed scheme, assuming that a (133,33,8) CDS is used for concatenation, there are 1024 − 133 × ⌊1024/133⌋ = 93 subcarriers at the tail. To search for the remaining 25 (= 256 − 33 × ⌊1024/133⌋) pilot indices which minimize µ, (2 × 93 − 25 + 1) × 25/2 = 2025 iterations are required. It is also worth noting that AS-SAMP provides a comparable or even reduced MSE using fewer pilots than OMP for a given SNR. For example, at SNR = 16 dB, the MSE of AS-SAMP for P = 128 is lower than that of OMP for P = 256 for all considered pilot placement schemes.


Finally, we compare the BER performance for the different pilot placement schemes, with results shown in Figs. 10 and 11. In Fig. 10, the OMP and AS-SAMP algorithms are considered. Clearly, AS-SAMP provides a better BER performance, and the proposed C-CDS with TS is slightly better than the other pilot schemes; this is consistent with the MSE comparison in Fig. 9. Fig. 11 compares the BER performance of the OMP, CoSaMP, SAMP, and AS-SAMP algorithms with the random and the proposed pilot arrangements. In general, AS-SAMP with the proposed pilot allocation scheme provides the best BER performance among all the estimation algorithms and considered pilot placement schemes.

VI. CONCLUSION

In this paper, we have proposed an adaptive step size SAMP algorithm, AS-SAMP, together with an efficient near-optimal pilot placement scheme, for sparse channel estimation in OFDM systems. The proposed reconstruction algorithm features an adaptive step size adjustment strategy and has the advantage of not requiring a priori knowledge of the sparsity of the channel. It is shown through performance analysis that the proposed algorithm can significantly improve the estimation accuracy without introducing significant additional complexity. In order to ensure a satisfactory estimation, we have further proposed a near-optimal pilot placement scheme, which is based on the concatenated CDS with an iterative tail search. Because the search space of the proposed method is significantly reduced, its complexity is much lower than that of the iterative procedures in the literature. Monte Carlo simulations show that the proposed AS-SAMP algorithm with the new pilot placement scheme improves the MSE performance of the channel estimate, as well as the system BER, without significantly increasing the computational complexity, and thus offers a better trade-off between complexity and performance.

APPENDIX A

RECONSTRUCTION PERFORMANCE OF AS-SAMP

The recovery performance of the proposed algorithm, AS-SAMP, is based on the theoretical

performance of SAMP and subspace pursuit (SP) [39]; therefore, the proofs which follow the


format in [27], [39] are developed for two cases: exact recovery from noiseless measurements and approximate recovery from noisy measurements. Before stating Theorem 1 on the exact recovery of the AS-SAMP algorithm, we need two results, summarized in the lemmas below.

Lemma 1: Given an arbitrary K-sparse signal h and the corresponding measurement Y_p = Ah, let the total number of stages decided by AS-SAMP be J and let s_i, i ∈ {1, 2, ..., J}, be the step size of the ith stage. If A satisfies the RIP with parameter δ_{3K_J} < 0.06 [17], where K_J = Σ_{i=1}^{J} s_i is the estimated sparsity level, then the last stage of AS-SAMP is equivalent to the SAMP algorithm with estimated sparsity K_J, except possibly for different contents of the final support set and the observation residual vector.

Proof: During the last stage of AS-SAMP, the final support set has sizeKJ . Given the same

size of the final support set, both algorithms use the same preliminary and final correlation test,

which returns theKJ indices corresponding to the largest absolute values of|A†CtYp|. The only

differences are in the content of the final support set and theobservation residual vector.

Lemma 2: AS-SAMP guarantees the convergence of the recovery process. The upper bound on the number of iterations that AS-SAMP involves is

$$-\log\!\left(\frac{|h_{min}|}{\| h \|}\right)\left(\frac{-1}{\log(C_{K_1})} + \frac{-1}{\log(C_{K_2})} + \cdots + \frac{-1}{\log(C_{K_J})}\right) + J, \qquad (12)$$

where $h_{min}$ is the non-zero element with the minimum magnitude, $C_{K_i} = \frac{2\delta_{3K_i}(1+\delta_{3K_i})}{(1-\delta_{3K_i})^3}$, $i = 1, 2, ..., J$, $\delta_{3K_i}$ is the RIP parameter in the $i$th stage, and $K_i$ is the size of the final support set in the $i$th stage.

Proof: Lemma 2.1 is introduced first, as it serves as a foundation for the proof of Lemma 2.

Lemma 2.1: The energy difference between the signal captured by the final support set from the current iteration and that captured by the final support set from the previous iteration, i.e., $\| h_{F_t} \|^2 - \| h_{F_{t-1}} \|^2$, decreases as the number of iterations increases, before the estimated sparsity reaches the true sparsity.

The proof of Lemma 2.1 is postponed to Appendix B. Similar to SAMP, AS-SAMP takes a finite number of iterations to approach the sparse estimate. If the algorithm were to fall into an infinite loop within a certain stage, the final support set would repeat, which contradicts the fact that the energy difference decreases monotonically. Intuitively, AS-SAMP reaches the final estimate with the same estimated sparsity level faster than SAMP, because the most significant entries are reconstructed by selecting a larger number of coordinates into the support set during the initial stages. Let the number of iterations required in the $i$th stage using the proposed algorithm be $n_{it}^{i}$, $i = 1, 2, ..., J$. According to Theorem 6 in [39], each iteration in a particular stage of both SAMP and AS-SAMP contains two correlation maximization tests, and the property below holds:

$$n_{it}^{1} \le \frac{-\log(|h_{min}|/\| h \|)}{-\log(C_{K_1})} + 1, \quad n_{it}^{2} \le \frac{-\log(|h_{min}|/\| h \|)}{-\log(C_{K_2})} + 1, \quad \ldots, \quad n_{it}^{J} \le \frac{-\log(|h_{min}|/\| h \|)}{-\log(C_{K_J})} + 1. \qquad (13)$$

Let the total number of iterations required be $n_{total}$; then

$$n_{total} = n_{it}^{1} + n_{it}^{2} + \cdots + n_{it}^{J} \le -\log\!\left(\frac{|h_{min}|}{\| h \|}\right)\left(\frac{-1}{\log(C_{K_1})} + \frac{-1}{\log(C_{K_2})} + \cdots + \frac{-1}{\log(C_{K_J})}\right) + J. \qquad (14)$$

Moreover, the upper bound on the number of iterations for AS-SAMP is compared with that for SAMP in the corollary below.

Corollary 1: Provided that $A$ satisfies the RIP with parameters $\delta_{3K_{s\text{-}ASSAMP}} < 0.06$ and $\delta_{3K_{s\text{-}SAMP}} < 0.06$, where $K_{s\text{-}ASSAMP}$ and $K_{s\text{-}SAMP}$ are the estimated sparsity levels for AS-SAMP and SAMP, respectively, the upper bound on the number of iterations for AS-SAMP is smaller than that for SAMP.

The proof of Corollary 1 is deferred to Appendix B. Now, based on the lemmas above, a sufficient condition for exact reconstruction is given in the following theorem.

Theorem 1 (Exact recovery from noiseless measurements): Let $K_{s\text{-}ASSAMP} = s_I J$, where $s_I = s_1$ is the initial step size and $J$ is the total number of stages of the AS-SAMP algorithm. If the sensing matrix $A$ satisfies the RIP with parameter $\delta_{3K_{s\text{-}ASSAMP}} < 0.06$, the AS-SAMP algorithm is guaranteed to exactly recover $h$ from $Y_p$ via a finite number of iterations.

Proof: Based on Lemma 1 and Lemma 2, when the RIP condition is satisfied, because the last stage is equivalent to SAMP with estimated sparsity $K_{s\text{-}ASSAMP}$, the proposed algorithm guarantees exact recovery of the target signal after this stage, and it takes a finite number of iterations to reach $K_{s\text{-}ASSAMP}$.

Remark: From Lemma 1, a sufficient condition required for $A$ to guarantee exact recovery is $\delta_{3K_J} < 0.06$, where $K_J = \sum_{i=1}^{J} s_i$ and $s_i$ is the step size in the $i$th stage. As $s_I \ge s_2 \ge \cdots \ge s_J$, we have $K_J \le s_I J$. Therefore, a more restrictive requirement on the RIP parameter of $A$ is $\delta_{3 s_I J} < 0.06$, which is $\delta_{3K_{s\text{-}ASSAMP}} < 0.06$. The sufficient condition for SAMP is more restrictive than that for the SP algorithm, as the estimated sparsity level $K_{s\text{-}SAMP} = s\lceil K/s \rceil$, where $s$ is the fixed step size in SAMP, is always larger than the true sparsity $K$ [27]. Similarly, to compare the restrictiveness of the condition for AS-SAMP, the values of $K_{s\text{-}SAMP}$, $K_{s\text{-}ASSAMP}$ and $K$ need to be compared. As $\lceil K/J \rceil \le s_I \le \frac{s\lceil K/s \rceil}{J}$, we have $K \le s_I J \le s\lceil K/s \rceil$ and thus $K \le K_{s\text{-}ASSAMP} \le K_{s\text{-}SAMP}$. Furthermore, because of the monotonicity of $\delta_{3K}$, $\delta_{3K_{s\text{-}ASSAMP}} \le \delta_{3K_{s\text{-}SAMP}}$; as a result, if $\delta_{3K_{s\text{-}SAMP}} < 0.06$, then $\delta_{3K_{s\text{-}ASSAMP}} < 0.06$ also holds, which means that the requirement on $A$ for AS-SAMP is less restrictive than that for SAMP. Moreover, as $A$ is a $P \times L$ partial DFT matrix in our application and the indices of the $P$ pilots are randomly chosen, $A$ satisfies the RIP with an overwhelming probability provided that

$$K \le \frac{C_1 P}{(\log N)^6}, \qquad (15)$$

where $C_1$ depends only on the RIP parameter (by overwhelming probability, we mean that the probability is at least $1 - N^{-1/C_1}$) [16], and $K$ is the sparsity of the target CIR. In fact, (15) expresses the minimum number of pilots ($P \ge \frac{K(\log N)^6}{C_1}$) required such that a random subset of $A$ with average cardinality $3K_{s\text{-}ASSAMP}$ satisfies the RIP with high probability. Specifically, for $P \ge 8K$, the recovery rate is above 90% [16].
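As a quick illustration of the sparsity-level comparison in the remark above, the short Python snippet below evaluates $K_{s\text{-}ASSAMP}$ and $K_{s\text{-}SAMP}$ for an assumed true sparsity, initial step size and number of stages, together with the rule-of-thumb pilot count $P \ge 8K$ quoted from [16]; the particular numbers are illustrative assumptions and are not taken from the simulations of this paper.

import math

K = 15        # assumed true sparsity (illustrative)
s_fixed = 6   # assumed fixed SAMP step size (illustrative)
s_I = 5       # assumed initial AS-SAMP step size (illustrative)
J = 3         # assumed total number of AS-SAMP stages (illustrative)

K_s_samp = s_fixed * math.ceil(K / s_fixed)   # SAMP estimated sparsity level
K_s_assamp = s_I * J                          # AS-SAMP level used in Theorem 1

# For such choices the ordering K <= K_s-ASSAMP <= K_s-SAMP from the remark holds.
print(K, K_s_assamp, K_s_samp)

# Rule of thumb quoted from [16]: P >= 8K pilots for a recovery rate above 90%.
print("suggested minimum number of pilots:", 8 * K)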

The second part investigates the approximate recovery of the proposed algorithm from inaccurate measurements. Two types of inaccurate measurements are considered: one subject to noise perturbation, and the other subject to an approximately sparse signal, whose non-significant elements are comparatively small (but not zero), together with noise.

Theorem 2 (Approximate recovery from noisy measurements): Consider $h \in R^N$ as a $K$-sparse signal, $Y_p = Ah + W_p \in R^P$ as the noisy measurement vector, and $W_p$ as a noise vector generated from a Gaussian distribution with zero mean and variance $\sigma^2$. If the measurement matrix $A$ satisfies the RIP with parameter $\delta_{3K_{s\text{-}ASSAMP}} < 0.03$, the signal approximation $\hat{h}$ satisfies:

$$\| h - \hat{h} \| \le \frac{1 + \delta_{3K_{s\text{-}ASSAMP}}}{\delta_{3K_{s\text{-}ASSAMP}}(1 - \delta_{3K_{s\text{-}ASSAMP}})}\, \| W_p \| = \frac{1 + \delta_{3K_{s\text{-}ASSAMP}}}{\delta_{3K_{s\text{-}ASSAMP}}(1 - \delta_{3K_{s\text{-}ASSAMP}})}\, \sigma. \qquad (16)$$
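For a sense of scale, the right-hand side of (16) can be evaluated directly. The snippet below does so for a few RIP parameters within the admissible range $\delta < 0.03$; the chosen values of $\delta$ and of the noise level are illustrative assumptions only.

def noisy_bound(delta, sigma):
    # Right-hand side of (16) for a given RIP parameter and noise level.
    return (1 + delta) / (delta * (1 - delta)) * sigma

# Illustrative values only; Theorem 2 requires delta < 0.03.
for delta in (0.01, 0.02, 0.029):
    print(delta, noisy_bound(delta, sigma=0.1))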

Corollary 2 (Approximate recovery from signal and noise perturbations): Consider $h \in R^N$ as a compressible $K$-sparse signal, and let $h_K$ represent its $K$ most significant entries. The signal $h$ is compressibly sparse if $h - h_K \ne 0$. Under the same assumptions as in Theorem 2, if $A$ satisfies the RIP with parameter $\delta_{6K_{s\text{-}ASSAMP}} < 0.03$, the reconstruction distortion of the AS-SAMP algorithm is written as follows:

$$\| h - \hat{h} \| \le \frac{1 + \delta_{6K_{s\text{-}ASSAMP}}}{\delta_{6K_{s\text{-}ASSAMP}}(1 - \delta_{6K_{s\text{-}ASSAMP}})}\left(\sigma + \sqrt{\frac{1 + \delta_{6K_{s\text{-}ASSAMP}}}{K}}\,\| h - h_K \|_1\right). \qquad (17)$$

The proofs of Theorem 2 and Corollary 2 are similar to those of the corresponding theorem and corollary in [39], as AS-SAMP is equivalent to SAMP with the estimated sparsity $K_{s\text{-}ASSAMP}$ at the last stage, except for the different contents of the candidate and final support sets, which do not affect the stability of AS-SAMP under either signal or noise perturbations.
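For concreteness, the following is a minimal NumPy sketch of the AS-SAMP iteration analyzed above, written from the pseudocode of Algorithm 1 in the main text. The toy problem sizes, the random pilot selection, the function name as_samp and all other identifiers are illustrative assumptions, and this sketch is not the implementation used for the reported simulations.

import numpy as np

def as_samp(Yp, A, eps, gamma, s_init, max_iter=200):
    # Minimal AS-SAMP sketch following Algorithm 1 in the main text (assumed names).
    P, n_taps = A.shape
    F = np.array([], dtype=int)      # final support set from the previous iteration
    r = Yp.copy()                    # current residual
    s, L = s_init, s_init            # current step size and support size
    h_old_norm = 0.0
    for _ in range(max_iter):
        if np.linalg.norm(r) <= eps:
            break
        corr = np.abs(A.conj().T @ r)
        D = np.argsort(corr)[-L:]                      # preliminary correlation test
        C = np.union1d(D, F)                           # candidate support set
        ls_C = np.linalg.pinv(A[:, C]) @ Yp
        F_new = C[np.argsort(np.abs(ls_C))[-L:]]       # final correlation test
        ls_F = np.linalg.pinv(A[:, F_new]) @ Yp
        r_temp = Yp - A[:, F_new] @ ls_F
        if np.linalg.norm(r_temp) < eps:               # residual small enough: stop
            F = F_new
            break
        if np.linalg.norm(r_temp) >= np.linalg.norm(r):
            # Move to the next stage; halve the step once the signal energy stabilizes.
            if np.linalg.norm(ls_F) - h_old_norm < gamma:
                s = int(np.ceil(s / 2))
            L += s
            h_old_norm = np.linalg.norm(ls_F)
        else:                                          # stay in the stage, accept the update
            F, r = F_new, r_temp
    h_hat = np.zeros(n_taps, dtype=A.dtype)
    if F.size:
        h_hat[F] = np.linalg.pinv(A[:, F]) @ Yp
    return h_hat

# Noiseless toy example with an assumed partial-DFT measurement matrix.
rng = np.random.default_rng(0)
N, P, n_taps, K = 256, 64, 60, 6                    # illustrative sizes
rows = rng.choice(N, size=P, replace=False)         # random pilot subcarrier indices
D_full = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(n_taps)) / N) / np.sqrt(N)
A = D_full[rows, :]                                 # partial DFT measurement matrix
h = np.zeros(n_taps, dtype=complex)
h[rng.choice(n_taps, size=K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
Yp = A @ h                                          # noiseless pilot observations
h_hat = as_samp(Yp, A, eps=1e-8, gamma=1.0, s_init=2)
print(np.linalg.norm(h - h_hat))                    # should be close to zero

In this noiseless toy instance the reported error should be near zero whenever the randomly drawn measurement matrix is well conditioned, mirroring the exact-recovery statement of Theorem 1.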

APPENDIX B

PROOF OF LEMMA 2.1 AND COROLLARY 1

A. Proof of Lemma 2.1

The proof is derived from Theorem 2 in [39], because both the preliminary and final tests are correlation maximization tests.

Proof: Provided that the sensing matrix $A$ satisfies the RIP with parameter $\delta_{3K_J} < 0.06$,

$$\| h_{\bar{F}_t} \|^2 \le C_{K_i}^2 \| h_{\bar{F}_{t-1}} \|^2 \le C_{K_J}^2 \| h_{\bar{F}_{t-1}} \|^2 = \zeta \| h_{\bar{F}_{t-1}} \|^2, \qquad (18)$$

where $h_{\bar{F}_t}$ is the part of the signal not captured by $F_t$ after the $t$th iteration. (18) is based on $0 < C_{K_1} \le C_{K_2} \le \cdots \le C_{K_J} < 1$, and therefore $\zeta = C_{K_J}^2 < 1$. Thus, the following derivation holds:

$$\| h_{\bar{F}_1} \|^2 - \zeta \| h \|^2 \le 0 \le \| h_{\bar{F}_2} \|^2, \qquad 0 \le \| h_{\bar{F}_1} \|^2 - \| h_{\bar{F}_2} \|^2 \le \zeta \| h \|^2, \qquad (19)$$

where $h = h_{\bar{F}_0}$. As $\| h_{\bar{F}_1} \|^2 = \| h \|^2 - \| h_{F_1} \|^2$ and $\| h_{\bar{F}_2} \|^2 = \| h \|^2 - \| h_{F_2} \|^2$, (19) can be written as:

$$0 \le (\| h \|^2 - \| h_{F_1} \|^2) - (\| h \|^2 - \| h_{F_2} \|^2) \le \zeta \| h \|^2, \qquad 0 \le \| h_{F_2} \|^2 - \| h_{F_1} \|^2 \le \zeta \| h \|^2. \qquad (20)$$

Similarly, we have

$$0 \le \| h_{F_3} \|^2 - \| h_{F_2} \|^2 \le \zeta^2 \| h \|^2, \quad 0 \le \| h_{F_4} \|^2 - \| h_{F_3} \|^2 \le \zeta^3 \| h \|^2, \quad \cdots, \quad 0 \le \| h_{F_t} \|^2 - \| h_{F_{t-1}} \|^2 \le \zeta^{t-1} \| h \|^2. \qquad (21)$$

As $1 > \zeta > \zeta^2 > \zeta^3 > \zeta^4 > \cdots$, the energy difference between two consecutive iterations shrinks toward zero, with an upper bound that decays geometrically at a rate governed by the RIP-related constant $\zeta$.
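As a numeric illustration of this geometric bound, the short snippet below evaluates $\zeta = C_{K_J}^2$ for an assumed RIP parameter and prints the first few bounds $\zeta^{t-1}\| h \|^2$ from (21); the chosen value of $\delta_{3K_J}$ and the unit signal energy are illustrative assumptions.

delta = 0.05                                    # assumed RIP parameter (< 0.06)
C = 2 * delta * (1 + delta) / (1 - delta) ** 3  # C_{K_J} as defined in Lemma 2
zeta = C ** 2
h_energy = 1.0                                  # assume ||h||^2 = 1 for illustration

# Upper bounds on ||h_{F_t}||^2 - ||h_{F_{t-1}}||^2 from (21), for t = 2, ..., 6
for t in range(2, 7):
    print(t, zeta ** (t - 1) * h_energy)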

B. Proof of Corollary 1

Proof: Since both the SAMP and AS-SAMP algorithms use the preliminary and final tests, the upper bound on the number of iterations in Lemma 2 can also be applied to the SAMP algorithm. Consider the same target signal for both algorithms; according to Lemma 2, we have

$$n_{total} \le -\log\!\left(\frac{|h_{min}|}{\| h \|}\right)\left(\frac{-1}{\log(C_{K_1})} + \frac{-1}{\log(C_{K_2})} + \cdots + \frac{-1}{\log(C_{K_J})}\right) + J.$$

As $0 < \delta_{3K_1} \le \delta_{3K_2} \le \cdots \le \delta_{3K_J} < 0.06$, then $0 < \frac{-1}{\log(C_{K_1})} \le \frac{-1}{\log(C_{K_2})} \le \cdots \le \frac{-1}{\log(C_{K_J})}$. Thus we have $\frac{-1}{\log(C_{K_1})} + \frac{-1}{\log(C_{K_2})} + \cdots + \frac{-1}{\log(C_{K_J})} \le \frac{-J}{\log(C_{K_J})}$, and therefore $n_{total} \le \frac{-J\log(|h_{min}|/\| h \|)}{-\log(C_{K_J})} + J$.

With the same target signal and the same total number of stages, the upper bound only depends on $C_{K_J}$. According to Appendix A, $0 < \delta_{3K_{s\text{-}ASSAMP}} \le \delta_{3K_{s\text{-}SAMP}} < 0.06$, and therefore $C_{K_{s\text{-}ASSAMP}} \le C_{K_{s\text{-}SAMP}}$. Clearly, $0 < \frac{-1}{\log(C_{K_{s\text{-}ASSAMP}})} \le \frac{-1}{\log(C_{K_{s\text{-}SAMP}})}$ and $\frac{-J\log(|h_{min}|/\| h \|)}{-\log(C_{K_{s\text{-}ASSAMP}})} + J \le \frac{-J\log(|h_{min}|/\| h \|)}{-\log(C_{K_{s\text{-}SAMP}})} + J$.

For example, given $\delta_{3K_{s\text{-}ASSAMP}} = 0.01$, which leads to $\log(C_{K_{s\text{-}ASSAMP}}) = -5.59$, a total number of stages equal to $5$, and supposing $\log(|h_{min}|/\| h \|) = -7$, the upper bound on the number of iterations that the proposed algorithm involves is $11$. On the other hand, with the same $h$ and $J$, supposing $\delta_{3K_{s\text{-}SAMP}} = 0.05$, the upper bound on the number of iterations for the SAMP algorithm is $16$.
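The two numbers in this example can be reproduced with a few lines of Python. Note that the quoted value $\log(C) = -5.59$ for $\delta = 0.01$ is matched by base-2 logarithms, which is an assumption inferred from the numbers rather than stated in the text, and that the bounds are truncated to integers.

import math

def iteration_bound(delta, J, log_ratio):
    # Upper bound on the iteration count from (12), assuming the same step size
    # in every stage and base-2 logarithms (an inferred assumption).
    C = 2 * delta * (1 + delta) / (1 - delta) ** 3
    return -log_ratio * (J * (-1 / math.log2(C))) + J

log_ratio = -7   # assumed value of log(|h_min| / ||h||), as in the example above
J = 5            # total number of stages, as in the example above

print(math.floor(iteration_bound(0.01, J, log_ratio)))  # AS-SAMP example: 11
print(math.floor(iteration_bound(0.05, J, log_ratio)))  # SAMP example: 16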

APPENDIX C

PROOF OF THEOREM 3

Theorem 3: Assume that the measurement matrix $A$ is composed of $P$ rows of $D$, where the $(m,n)$ element of $D$ is given by $[D]_{m,n} = \frac{1}{\sqrt{N}}e^{-j2\pi mn/N}$ ($0 \le m \le N-1$ and $0 \le n \le L-1$), and the index set of the selected rows is $\Omega$. All the pilot symbols are equal-powered with $E_P = |X(k_c)|^2$, $c = 1, 2, ..., P$. If $\Omega$ is a CDS with parameters $(P, N, \lambda)$, then the mutual coherence of the resulting measurement matrix, $\mu(A)$, is minimized. In particular, $\lambda = \frac{P^2 - P}{N-1}$ and the minimized value of the mutual coherence is $E_P\sqrt{\frac{NP - P^2}{N-1}}$.

This is a refined version of the proof presented in [32] and [33], so that the proof is applicable in the general case.

Proof: Recall from (10) that, because $| \langle d_{p_i} \cdot d_{p_j} \rangle |$ only depends on $\Delta = i - j$ and $\| d_{p_i} \| = \| d_{p_j} \| = 1$, designing the optimal pilot pattern can be formulated as:

$$\Omega_{opt} = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} E_P \,| \langle d_{p_i} \cdot d_{p_{i+\Delta}} \rangle | = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} E_P \left| \sum_{r=1}^{P} \omega^{p_r \cdot \Delta} \right|, \qquad (22)$$

where $\omega = e^{-j\frac{2\pi}{N}}$. Maximizing $|\sum_{r=1}^{P} \omega^{p_r \cdot \Delta}|$ over $\Delta$ is equivalent to maximizing $|\sum_{r=1}^{P} \omega^{p_r \cdot \Delta}|^2 = \sum_{m=1}^{P} \omega^{p_m \cdot \Delta} \cdot \sum_{n=1}^{P} \omega^{-p_n \cdot \Delta}$, and $E_P$ is a constant; therefore, (22) can be rewritten as:

$$\Omega_{opt} = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} \sum_{m=1}^{P} \sum_{n=1}^{P} \omega^{(p_m - p_n)\Delta}. \qquad (23)$$

It is worth noting that $\sum_{m=1}^{P} \sum_{n=1}^{P} \omega^{(p_m - p_n)\Delta}$ in the above equation is, in general, a complex number. Thus, to make it applicable in the general case, a revision is made as follows:

$$\Omega_{opt} = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} \Bigg( \underbrace{P}_{m=n} + \underbrace{\sum_{m=1}^{P} \sum_{n=1}^{P} \mathrm{Re}\big[\omega^{|p_m - p_n|\Delta}\big]}_{m \ne n} \Bigg) = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} \Bigg( \underbrace{P}_{m=n} + \underbrace{\sum_{m=1}^{P} \sum_{n=1}^{P} \cos\!\Big(\frac{2\pi}{N}|p_m - p_n|\Delta\Big)}_{m \ne n} \Bigg). \qquad (24)$$


In the above equation, the mutual coherence of the resulting matrix depends not only on the spacing between two columns but also on the spacing between two pilots. Define the set $G = \{(p_m - p_n) \bmod N \,|\, 1 \le m, n \le P,\, m \ne n\}$, which contains the $N-1$ different values $g = 1, 2, ..., N-1$, where value $g$ is repeated $\lambda_g$ times. Then (24) can be rewritten as:

$$\Omega_{opt} = \arg\min_{\Omega} \max_{1 \le \Delta \le L-1} E_P \Bigg( \underbrace{P}_{m=n} + \underbrace{\sum_{g=1}^{N-1} \lambda_g \cos\!\Big(\frac{2\pi}{N}g\Delta\Big)}_{m \ne n} \Bigg). \qquad (25)$$

The problem in the above equation is equivalent to finding the optimal pilot pattern which minimizes the maximum value of $\sum_{g=1}^{N-1} \lambda_g \cos(\frac{2\pi}{N}g\Delta)$, $1 \le \Delta \le L-1$. To show that the patterns based on CDS are optimal, $\lambda_1 = \lambda_2 = \cdots = \lambda_{N-1}$ needs to be satisfied. Because

$$\max_{1 \le \Delta \le L-1} \sum_{g=1}^{N-1} \lambda_g \cos\!\Big(\frac{2\pi}{N}g\Delta\Big) \ge \frac{\sum_{\Delta=1}^{L-1}\Big(\sum_{g=1}^{N-1} \lambda_g \cos\big(\frac{2\pi}{N}g\Delta\big)\Big)}{L-1}, \qquad (26)$$

equality happens when $\lambda_1 = \lambda_2 = \cdots = \lambda_{N-1} = \frac{P^2 - P}{N-1}$, and the minimum value of the mutual coherence is

$$\mu(A)_{min} = E_P\sqrt{P + \frac{P^2 - P}{N-1}\cdot\frac{\sum_{\Delta=1}^{L-1}\Big(\sum_{g=1}^{N-1} \cos\big(\frac{2\pi}{N}g\Delta\big)\Big)}{L-1}} = E_P\sqrt{\frac{PN - P^2}{N-1}}. \qquad (27)$$
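The closed-form value in (27) can be checked numerically for a small difference set. The snippet below uses the classical $(7, 3, 1)$ CDS $\{1, 2, 4\}$ (a standard small example assumed here for illustration, with $N = 7$, $P = 3$ and $E_P = 1$) and compares the maximum column correlation $E_P|\sum_{r=1}^{P}\omega^{p_r\Delta}|$ over all shifts against $E_P\sqrt{(NP - P^2)/(N-1)}$.

import numpy as np

N, P, E_P = 7, 3, 1.0
pilots = np.array([1, 2, 4])          # (7, 3, 1) cyclic difference set (assumed example)

# Correlation magnitude between columns separated by Delta, as in (22)
deltas = np.arange(1, N)
corr = np.array([abs(np.exp(-2j * np.pi * pilots * d / N).sum()) for d in deltas])

mu_measured = E_P * corr.max()
mu_formula = E_P * np.sqrt((N * P - P ** 2) / (N - 1))

print(mu_measured, mu_formula)        # both should be close to sqrt(2) here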

REFERENCES

[1] L. Hanzo, Y. Akhtman, L. Wang, and M. Jiang, MIMO-OFDM for LTE, WiFi and WiMAX: Coherent versus Non-Coherent and Cooperative Turbo-Transceivers. John Wiley and IEEE Press, Dec. 2011.

[2] M. Sablatash, "Transmission of all-digital television: state of the art and future directions," IEEE Trans. Broadcast., vol. 40, no. 2, pp. 102-121, Jun. 1994.

[3] B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communication over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, pp. 198-209, Apr. 2008.

[4] S. Zhou and Z. Wang, OFDM for Underwater Acoustic Communications. John Wiley & Sons, Jun. 2014.

[5] H. Arslan and G. Bottomley, "Channel estimation in narrowband wireless communication systems," Wireless Commun. and Mobile Comput., vol. 1, no. 2, pp. 201-219, Apr. 2001.

[6] S. Coleri, M. Ergen, A. Puri, and A. Bahai, "Channel estimation techniques based on pilot arrangement in OFDM systems," IEEE Trans. Broadcast., vol. 48, no. 3, pp. 223-229, Sep. 2002.


[7] J. van de Beek, O. Edfors, M. Sandell, S. Wilson, and P. Borjesson, "On channel estimation in OFDM systems," in Proc. IEEE 45th Veh. Technol. Conf., Chicago, IL, Jul. 1995, pp. 815-819.

[8] W. Bajwa, J. Haupt, A. Sayeed, and R. Nowak, "Compressed channel sensing: A new approach to estimating sparse multipath channels," Proc. IEEE, vol. 98, no. 6, pp. 1058-1076, Jun. 2010.

[9] I. Fevrier, S. Gelfand, and M. Fitz, "Reduced complexity decision feedback equalization for multipath channels with large delay spreads," IEEE Trans. Commun., vol. 47, no. 6, pp. 927-937, Jun. 1999.

[10] S. Cotter and B. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Trans. Commun., vol. 50, no. 3, pp. 374-377, Mar. 2002.

[11] M. Stojanovic, "Retrofocusing techniques for high rate acoustic communications," J. Acoust. Soc. Am., vol. 117, no. 3, pp. 1173-1185, Mar. 2005.

[12] S. Ariyavisitakul, N. Sollenberger, and L. Greenstein, "Tap-selectable decision-feedback equalization," IEEE Trans. Commun., vol. 45, no. 12, pp. 1497-1500, Dec. 1997.

[13] G. Gui, W. Peng, and F. Adachi, "Improved adaptive sparse channel estimation based on the least mean square algorithm," in Proc. IEEE Wireless Commun. and Networking Conf. (WCNC'13), Shanghai, China, Apr. 2013, pp. 3105-3109.

[14] C. Carbonelli, S. Vedantam, and U. Mitra, "Sparse channel estimation with zero tap detection," IEEE Trans. Commun., vol. 6, no. 5, pp. 1743-1763, May 2007.

[15] C. Berger, S. Zhou, J. Preisig, and P. Willett, "Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing," IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1708-1721, Mar. 2010.

[16] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, pp. 489-509, Feb. 2006.

[17] E. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies," IEEE Trans. Inf. Theory, vol. 52, pp. 5406-5425, Dec. 2006.

[18] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.

[19] R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag., vol. 24, no. 4, pp. 118-121, Jul. 2007.

[20] "Compressive sensing resources," Houston, TX, http://www.dsp.ece.rice.edu/cs/.

[21] G. Taubock, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 255-271, Apr. 2010.

[22] C. Berger, Z. Wang, J. Huang, and S. Zhou, "Application of compressive sensing to sparse channel estimation," IEEE Commun. Mag., vol. 48, no. 11, pp. 164-174, Nov. 2010.

[23] H. Die, W. Xiaodong, and H. Lianghua, "A new sparse channel estimation and tracking method for time-varying OFDM systems," IEEE Trans. Veh. Technol., vol. 62, no. 9, pp. 4648-4653, Nov. 2013.

[24] R. Prasad, C. R. Murthy, and B. D. Rao, "Joint approximately sparse channel estimation and data detection in OFDM systems using sparse Bayesian learning," IEEE Trans. Signal Process., vol. 62, no. 14, pp. 3591-3603, Jul. 2014.


[25] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

[26] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Commun. ACM, vol. 53, no. 12, pp. 93-100, Dec. 2010.

[27] T. T. Do, L. Gan, N. Nguyen, and T. D. Tran, "Sparsity adaptive matching pursuit algorithm for practical compressed sensing," in Proc. 42nd Asilomar Conf. on Signals, Syst. and Comput., Pacific Grove, CA, Oct. 2008, pp. 581-587.

[28] X. Bia, X. Chen, and Y. Zhang, "Variable step size stagewise adaptive matching pursuit algorithm for image compressed sensing," in Proc. IEEE Int. Conf. on Signal Process., Commun. and Comput. (ICSPCC), Kunming, China, Aug. 2013, pp. 1-4.

[29] X. He, R. Song, and W. Zhu, "Optimal pilot pattern design for compressed sensing-based sparse channel estimation in OFDM systems," J. Circuits, Syst., and Signal Process., vol. 31, no. 4, pp. 1379-1395, Aug. 2012.

[30] L. Applebaum, W. Bajwa, A. Calderbank, J. Haupt, and R. Nowak, "Deterministic pilot sequences for sparse channel estimation in OFDM systems," in Proc. 17th Int. Conf. on Digital Signal Process. (DSP), Corfu, Greece, Jul. 2011, pp. 1-7.

[31] C. Qi and L. Wu, "Optimized pilot placement for sparse channel estimation in OFDM systems," IEEE Signal Process. Lett., vol. 18, no. 12, pp. 749-752, Dec. 2011.

[32] P. Pakrooh, A. Amini, and F. Marvasti, "OFDM pilot allocation for sparse channel estimation," EURASIP J. Adv. Signal Process., vol. 2012, no. 59, pp. 1-9, Mar. 2012.

[33] C. Qi and L. Wu, "A study of deterministic pilot allocation for sparse channel estimation in OFDM systems," IEEE Commun. Lett., vol. 16, no. 5, pp. 742-744, May 2012.

[34] M. Gay, A. Lampe, and M. Breiling, "Sparse OFDM channel estimation based on regular pilot grids," in Proc. 9th Int. Conf. on Syst., Commun. and Coding (SCC), Munich, Germany, Jan. 2013, pp. 1-6.

[35] J. Chen, C. Wen, and P. Ting, "An efficient pilot design scheme for sparse channel estimation in OFDM systems," IEEE Commun. Lett., vol. 17, no. 7, pp. 1352-1355, Jul. 2013.

[36] C. Qi, G. Yue, L. Wu, Y. Huang, and A. Nallanathan, "Pilot design schemes for sparse channel estimation in OFDM systems," IEEE Trans. Veh. Technol., no. 99, pp. 1-13, Jun. 2014.

[37] C. J. Colbourn and J. H. Dinitz, "Other combinatorial designs," in Handbook of Combinatorial Designs, 2nd ed. CRC Press, 2007, ch. 5, pp. 392-436.

[38] J. Myers and A. D. Well, Research Design and Statistical Analysis, 2nd ed. Lawrence Erlbaum, 2003.

[39] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230-2249, May 2009.


Page 32: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

IEEE TRANSACTIONS ON COMMUNICATIONS 1

An Improved Compressed Sensing-BasedChannel Estimation Algorithm with

Near-optimal Pilot PlacementYi Zhang, Ramachandran Venkatesan, Octavia A. Dobre, and Cheng Li

Faculty of Engineering and Applied ScienceMemorial University, NL, Canada A1B 3X5

Email: yz7384, venky, odobre, [email protected]

Abstract—This paper presents an improved recoveryalgorithm based on sparsity adaptive matching pursuit(SAMP) with near-optimal pilot placement, for compressedsensing (CS)-based sparse channel estimation in orthogonalfrequency division multiplexing (OFDM) communicationsystems. Compared with other state-of-the-art recoveryalgorithms, the proposed algorithm possesses the featureof SAMP of not requiring a priori knowledge of thesparsity, and moreover, adjusts the step size adaptively toapproach the true sparsity. Furthermore, different pilotarrangements result in different measurement matricesin CS and thus, can affect the estimation accuracy. Itis known that by minimizing the mutual coherence ofthe measurement matrix when the signal is sparse onthe unitary discrete Fourier transform (DFT) matrix, theoptimal set of pilot locations is a cyclic difference set(CDS). Based on this, we propose an efficient near-optimalpilot placement scheme in cases where CDS does notexist. Simulation results show that the proposed channelestimation algorithm, with the new pilot placement scheme,offers a better trade-off between the performance in termsof mean squared error (MSE) and bit error rate (BER)and the complexity, when compared to other estimationalgorithms.

Index Terms—Sparse channel estimation, compressedsensing/compressive sensing, sparsity adaptive matchingpursuit, pilot placement, cyclic difference set.

I. I NTRODUCTION

Orthogonal frequency division multiplexing (OFDM)has been widely adopted in various wireless communi-cation standards, such as worldwide interoperability formicrowave access (WiMAX), long term evolution (LTE)[1], and high definition television (HDTV) broadcastingstandards [2], due to its high data rate, efficient spectralutilization and ability to cope with multipath fading.Its application in underwater acoustic (UWA) commu-nications has been exploited in recent years [3], [4]. In

This work is supported in part by the Natural Science and En-gineering Research Council (NSERC) of Canada and Research andDevelopment Corporation Newfoundland and Labrador (RDC).

coherent digital wireless systems, obtaining accurate es-timates of the channel state information (CSI) is criticalat the receiver [5]. The data-aided channel estimationin OFDM communication systems can be performedby either inserting pilot tones into certain subcarriersof each OFDM symbol, or by using all subcarriers ofOFDM symbols as pilots within a specific period [6].Conventional methods for CSI estimation, such as leastsquare (LS) and minimum mean-square (MMSE) [7],cannot exploit the sparsity of the wireless channels,and they often lead to the excessive-utilization of thespectral and energy resources. Recently, studies havesuggested that many multipath channels tend to exhibit asparse structure in the sense that the majority of channelimpulse response (CIR) taps end up being either zero orbelow the noise floor [8]. A few examples include: a)in North American HDTV broadcasting standard, thereare only a few significant echoes over a typical delayspread [9], [10]; b) UWA channels are characterized bya few dominant echoes over larger time dispersion (in theorder of hundreds of milliseconds) [4], [11]; c) channelsof broadband wireless systems in hilly environment alsoexhibit the sparse CIR [12], [13]. As opposed to thetraditional methods, channel estimation exploiting thesparsity of the channels reduces the required numberof pilots, and thus effectively improves the spectral andenergy efficiency [4], [8], [10], [14], [15].

More recently, advances in the new field of com-pressed sensing (CS) [16]–[18] have gained a fast-growing interest in signal processing and applied math-ematics [19], [20]. It has been shown in the literaturethat CS can be applied to sparse channel estimation [4],[8], [15], [21]–[24]. Unlike traditional channel estima-tion methods, CS allows accurate reconstruction of thesignal which is sparse on a certain basis, from a smallnumber of random linear projections/measurements [18].To ensure an accurate or even exact reconstruction ofthe target signal, a proper reconstruction algorithm anda properly designed measurement matrix are essential.

September 10, 2014 DRAFT

Page 31 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 33: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

2 IEEE TRANSACTIONS ON COMMUNICATIONS

Existing algorithms to recover a target sparse signalare generally grouped in two categories: linear program-ming (LP) and dynamic programming (DP). The basispursuit (BP) method in LP achieves a good MSE per-formance; however, its high computational complexitymakes it less attractive to real large-scale applications.The orthogonal matching pursuit (OMP) [25] algorithm,however, is the most popular algorithm in DP [15],[22]. An improved OMP variant, referred to as thecompressed sampling matching pursuit (CoSaMP) wasproposed in [26], with the MSE performance close tothat of the BP algorithm and complexity lower than thatof the OMP algorithm. However, all these algorithmsrequire the knowledge of the channel sparsity, which isoften not available in practical applications. Recently,the sparsity adaptive matching pursuit (SAMP) algorithmwas proposed to address this issue [27]. While CoSAMPand OMP require the level of sparsity asa prioriinformation to determine the number of iterations of thealgorithms, SAMP uses a stage-based iterative approachto estimate the sparsity, where a fixed preset step sizeis used at each consecutive stage. The results showedthat SAMP can outperform the OMP algorithm and itsvariants; however, its MSE performance and complexityare affected by the choice of the step size. Recently, astage-wise algorithm which uses different step sizes fordifferent stages is proposed in [28]. However, it is nota truly adaptive algorithm, as the change of step sizesdepends on a specific relation between the number ofmeasurements and the sparsity level.

In this paper, we propose a novel CS-based reconstruc-tion algorithm based on the SAMP algorithm, referredto as the adaptive step size SAMP (AS-SAMP), whichcan adaptively adjust the step size to achieve fast conver-gence. We provide simulation results for sparse channelestimation in UWA-OFDM systems to demonstrate thebetter MSE and BER performance for channel estimationusing the proposed algorithm. It is also worth noting thatthe good performance is achieved without increasing thecomplexity significantly when compared with the othermentioned recovery algorithms. Furthermore, becausedifferent pilot placement choices will result in differentCS measurements, the result will directly affect theperformance of channel estimation algorithms. Equallyspaced pilots are in general optimal for conventionalchannel estimation methods, which are, however, nottrue in CS-based methods [29]. In existing studiesrelated to CS, randomly and deterministically placedpilot tones are mostly reported [29]–[35]. Although anexhaustive search of all possible combinations of thepilot indices guarantees the optimal pilot pattern, thecomputational complexity increases exponentially as thesearching space expands. Moreover, provided a partialDFT measurement matrix, it is known that if the pilotindices set is a cyclic difference set (CDS), the mutual

coherence of the measurement matrix is minimized [32],[33], [36]. However, it is not guaranteed that a CDS willexist for every pilot size. In this paper, we investigatethe problem of pilot placement based on the CDS. Whenthe CDS does not exist, we propose a novel pilot patternselection scheme which relies on the concatenated CDSwith an iterative tail search (C-CDS with TS). Becausethe proposed design is deterministic, it is more effi-cient than any other search-based methods. Simulationresults demonstrate that improvement in MSE and BERperformance can be achieved using the proposed pilotplacement scheme, when compared to the randomlyscattered pilots. The following are the main contributionsof this paper:

• We conduct a comparative analysis of the perfor-mance of existing reconstruction methods in termsof estimation accuracy and computational complex-ity. Furthermore, we propose a new reconstructionalgorithm for CS applications wherea priori knowl-edge of sparsity is not required.

• We propose an efficient scheme for near-optimalpilot placement to meet the requirement of the mea-surement matrix for a satisfactory reconstruction.

The remainder of this paper is organized as follows. InSection II, a review of CS fundamentals and the systemmodel are presented. In Section III, an improved re-construction algorithm for sparse channel estimations isproposed and a comparison with the existing reconstruc-tion algorithms is performed. Section IV introduces anovel pilot placement scheme based on the concatenatedCDS, with an iterative tail search. Section V presents thesimulation results and performance evaluation. SectionVI concludes the paper.

II. CS FUNDAMENTALS AND SYSTEM MODEL

The following notation will be used for the rest ofthe paper. A bold symbol represents a set, a vectoror a matrix, and a capital letter stands for frequencydomain representation.XT denotes the transpose ofX,X† denotes the Moore-Penrose pseudo-inverse matrix ofX, which is defined as(XHX)−1XH , with XH as theHermitian ofX, ‖ X ‖ and‖ X ‖1 denotes theℓ2 normand theℓ1 norm ofX, respectively.

A. CS Fundamentals

We consider a contaminated measurement vector ob-tained through

y = Φx+w, (1)

where x ∈ RN is a real-valued, one-dimensional,

discrete-time signal vector,Φ is the sensing matrix, andw ∈ R

N is a stochastic error term with bounded energy‖ w ‖< ε [18]. Assuming thatx can be expanded in anorthonormal basisΨ as

x = Ψα, (2)

DRAFT September 10, 2014

Page 32 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 34: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

SUBMITTED PAPER 3

whereα is theN × 1 coefficients vector, the signalxis K-sparse if and only ifK coefficients (K ≪ N ) inα are non-zero while the remaining coefficients are zeroor negligibly small. Substituting (2) in (1), one obtains

y = ΦΨα+w = Aα+w, (3)

where A is referred to as themeasurement matrix.Essentially, CS states thatx can be recovered, withhigh probability, by solving the under-determined prob-lem in (3). The reliability of recovery depends on twoconstraints: 1)α is sparse; 2)A satisfies the restrictisometry property (RIP) [16]–[18], which means thatfor an arbitrary levelδ ∈ (0, 1) and any index setI ∈ 0, 1, ..., N − 1 such that|I| ≤ K, where | · |denote the cardinality of the set, and for allα ∈ R

|I|,the following relation holds:

(1 − δ) ‖ α ‖2≤‖ AIα ‖2≤ (1 + δ) ‖ α ‖2, (4)

whereAI is the matrix containing the columns of whichthe indices are elements of the setI. An estimator ofαin (3) can be achieved by solving a convex optimizationproblem, which is formulated as [17]

α = argmin ‖ α ‖1, subject to‖ y −Aα ‖≤ ε, (5)

for a givenε > 0. If A satisfies RIP andα is sufficientlysparse, the norm of the reconstruction error is boundedby ‖ α−α ‖≤ Cε, whereC depends on the RIP relatedparameterδ of A rather thanα [16], [17]. Particularly,if a measurement matrix is composed of random rowsin an N × N DFT matrix, and ifM > CδK logN , aK-sparse signal can be reconstructed with probability ofat least1−O(N−δ), whereδ is the constant in the RIP,andCδ is approximately linear withδ [16].

B. System Model

We consider anN -subcarrier OFDM system in whichP subcarriers are used as pilots. The symbols transmittedon the kth subcarrier,X(k), 0 ≤ k ≤ N − 1, areassumed to be independent and identically distributedrandom variables drawn from a phase-shift keying (P-SK) or quadrature amplitude modulation (QAM) signalconstellation. Assume that the time-invariant multipathchannel having the impulse response

h(n) =

Np−1∑

p=0

ηpδ(n− τp), (6)

whereNp is the number of paths, andηp and τp arethe amplitude gain and the delay associated with thepthpath, respectively. The vector of received signal afterdiscrete Fourier transform (DFT) is expressed as

Y = XH +W = XDh+W, (7)

where X is an N × N diagonal matrix with theelements X(k), 0 ≤ k ≤ N − 1, on themain diagonal,Y = [Y (0), Y (1), ..., Y (N − 1)]T .H = [H(0), H(1), ..., H(N − 1)]T and W =[W (0),W (1), ...,W (N − 1)]T are the frequency re-sponse vector of the channel and additive white Gaussian

noise (AWGN), respectively.h = [h(0), h(1), ..., h(L−1)]T . The (m,n) element ofD is given by [D]m,n =1√Ne−j2πmn/N , where0 ≤ m ≤ N − 1 and 0 ≤ n ≤

L−1. After extracting the pilot subcarriers, we can writethe following input-output relationship

Yp = XpDph+Wp = Ah+Wp, (8)

where Yp = SY, Xp = SXST , Dp = SD,Wp = SW, and S is a P × N matrix for selectedpilot subcarriers. In addition,A = XpDp is a P × Lmatrix, referred to as the measurement matrix. The goalof CS-based channel estimation is to estimateh fromthe received pilotYp, given the measurement matrixA.

III. PROPOSEDAS-SAMP WITH APPLICATION TO

SPARSECHANNEL ESTIMATION

A. Comparison of Reconstruction Algorithms in the Lit-erature

The reconstruction algorithms considered in this paperare orthogonal matching pursuit (OMP), compressedsampling matching pursuit (CoSaMP), and sparsity adap-tive matching pursuit (SAMP). Fig. 1 depicts the corre-sponding flow charts, whereCt, Ft, rt, andK denotethe candidate support set, the final support set, theresidual vector in thetth iteration, and the sparsity ofthe target signal, respectively. As seen from Fig. 1, in theOMP algorithm,Ft is expanded by adding coordinatessuccessively and it uses only one maximum correlationtest to add one coordinate toFt. On the other hand, theCoSaMP algorithm refines a fixed-sizeFt by selectingcoordinates from a set of candidatesCt. It uses apreliminary correlation testand afinal correlation test,which are simplified topreliminary testand final test,to add one or more coordinates toFt. The final testremoves the wrong coordinates added in thepreliminarytest, which is referred to asbacktracking, and thereforeimproves the accuracy of the estimation [26].

However, most natural signals are compressible ratherthan strictly sparse. The sparsityK for these signalscould not be well-defined. It is shown that the reconstruc-tion accuracy can be significantly degraded as we eitherunderestimateor overestimateK [27]. Unlike the otheralgorithms, SAMP does not requirea priori knowledgeof K. It adopts a stage-wise approach to identifyFt

through the backtracking strategy. The size ofFt staysthe same among iterations in each stage, however, whenit moves to the next stage, the size ofFt is increasedby a fixed step sizes to search for more coordinatesof the recovered signal which correspond to the leastresidual. This process continues until the residual of therecovered signal falls below a predetermined threshold.Although SAMP guarantees exact recovery after a finitenumber of iterations (see proof in [27]), it leaves anopen question about the choice of the step sizes toachieve the trade-off between accuracy of estimation and

September 10, 2014 DRAFT

Page 33 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 35: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

4 IEEE TRANSACTIONS ON COMMUNICATIONS

Initialization

Maximum

Correlation Test

Computing rt

Merging Final

Support Set

Exit

Yes

Current iteration

< K ?

No

Initialization

Preliminary Maximum

Correlation Test

Exit

Yes

Final Test

Ct( fixed size )

Ft( fixed size )

Halting condition?

No

Initialization

Preliminary Maximum

Correlation Test

Halting condition?

No

Exit

Yes

Final Test

Ft( adaptive size )

Updating Ftand r

t

Yes

No

Updating

Stage, Ft

and rt

Ft-1

Computing Signal

Estimation

Merging Final

Support Set

Ft-1

Computing Signal

Estimation and rt

Yes

Ft-1: Final support set from the previous iteration

Ft: Final support set at the current iteration

rt-1: Residual from the previous iteration

rt: Residual at the current iteration

Ct: Candidate support set at the current iteration

Computing Signal

Estimation and rt

Ft-1

Merging Final

Support Set

Ct( adaptive size )

OMP CoSaMP SAMP

<t t -1r r ?

Fig. 1. Flow charts of the OMP, CoSaMP, and SAMP algorithms.

complexity. This motivates us to address the problem ofadaptively adjusting the step size between consecutivestages. Recently, a variable step size algorithm has beenproposed in [28]. However, the increment of the stepsize is based on a particular relationship between thenumber of the measurements and the sparsity, which isnot always valid in applications. Therefore, we propose anovel AS-SAMP algorithm which can adaptively adjustthe step size; this will be presented subsequently.

B. Proposed AS-SAMP Algorithm

Since a smaller step sizes in the SAMP algorithmleads to a better estimation accuracy while the complex-ity increases, and a largers degrades the accuracy of theestimation while the complexity decreases, an adaptivelyadjusteds may lead to a better trade-off between the ac-curacy of estimation and the complexity of the algorithm.Specifically, an adaptively adjusteds means that thechange ofs depends on how far the current reconstruc-tion state, e.g., the current reconstructed signal energy,or the estimated sparsity of the current reconstructedsignal, is from the state of the true signal. Because thesparse elements with large values are reconstructed inthe initial stages of the algorithm, the energy differenceof the reconstructed signal between consecutive stagesis reduced at a declining rate as the number of stagesincreases. In other words, the energy of the reconstructedsignal tends to be stable when the estimated sparsity isclose to the true sparsityK. Following this property,we propose the AS-SAMP algorithm; to expedite theconvergence, the algorithm begins with a larger stepsize (the initial step size is denoted assI ) which is

adaptively decreased to provide fine tuning in later stagesas the change rate of the reconstructed signal’s energydecreases. Consequently, an additional thresholdΓ isused to specify the beginning of the fine tuning. Thepseudocode for the proposed algorithm is presented asAlgorithm 1 . The algorithm is also stage-wise with avariable size ofFt in different stages. During a stage,it adopts two correlation tests iteratively, i.e., candidateand final tests, to search a certain number of coordinatescorresponding to the largest correlation values betweenthe signal residual and the columns of the measurement

Algorithm 1 AS-SAMPInput: Received signal at pilot subcarriersYp, mea-

surement matrixA, toleranceǫ, thresholdΓ, initialstep sizesI ;

1: Initialize h = [0, 0..., 0]T , hold = [0, 0..., 0]T ,rtemp = [0, 0..., 0]T , indices setD0 = ∅, candidatesupport setC0 = ∅, residualr0 = Yp, size of finalsupport setL = s = sI , final support setF0 = ∅,iteration indext = 1

2: while (‖rt−1‖ > ǫ) do3: Calculate signalSP = |AHrt−1|4: Select indices setDt in A corresponding to the

L largest elements inSP Preliminary test5: Merge chosen indices and final support set from

previous iteration into candidate support setCt =Dt ∪ Ft−1

6: Refine candidate set to final setFt by selectingindices corresponding to theL largest elements of|A†

CtYp| Final test7: Solve least-square problemh(Ft) = A

†FtYp

8: Calculate current residualrtemp = Yp −AFtA

†FtYp

9: if (‖rtemp‖ < ǫ) then10: rt = rt−1

11: Break12: else if (‖rtemp‖ ≥ ‖rt−1‖) then13: if (‖ h(Ft) ‖ − ‖ hold ‖< Γ) then14: s = ⌈s/2⌉, L = L + s, hold = h(Ft), rt =

rt−1, t = t+ 1 Fine tuning15: else16: L = L + s, hold = h(Ft), rt = rt−1, t =

t+ 1 Fast approaching17: end if18: else19: Ft−1 = Ft, rt = rtemp, t = t+ 120: end if21: end while22: return h

Output: Estimation of baseband channel impulse re-sponseh.

DRAFT September 10, 2014

Page 34 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 36: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

SUBMITTED PAPER 5

matrix. Then, the algorithm moves to the next stage untilthe recovered signal with the least residual is found. Asopposed to SAMP, the proposed algorithm incorporatestwo threshold values into the halting criterion: toleranceǫ andΓ. Therefore, AS-SAMP halts when the residual’snorm is smaller thanǫ, in which ǫ is set to be the noiseenergy. Meanwhile,s is decreased when the energy dif-ference of the reconstructed signal falls belowΓ, whosevalue is chosen based on empirical observations. Startingwith a sufficiently large initial step size (sI ≤ K), thealgorithm quickly approaches the target signal. However,when the difference in the energy of the reconstructedsignals becomes smaller than the presetΓ, the step sizeis reduced (by a factor of two) to avoidoverestimationof the K-sparse target signal. Thisoverestimationcansignificantly degrade the accuracy of the algorithm [27].Theoretical guarantee of exact recovery, in both noiselessand noisy cases, of AS-SAMP are provided with thecorresponding proofs in Appendix A.

C. Computational Complexity

In this section, the computational complexity of theexisting algorithms in the literature and the proposedalgorithm is compared in terms of the number of opera-tions. Among the steps of each algorithm, theMaximumCorrelation Testdominates the contribution to the com-plexity by the cost of the multiplicationAHrt; therefore,the number of operations in other steps are not consid-ered in this comparison. In addition, the complexity ofthe algorithms also depends on the number of iterations.Since SAMP takes a finite number of iterations up tostage⌈K/s⌉, and during each stage a portion of coor-dinates in the true support set are identified and refinedvia up toK iterations, an upper bound of the number ofiterations is⌈K/s⌉K. On the other hand, due to the fastconvergence of the AS-SAMP algorithm, fewer stagesare required to provide the same quality of estimates,and as the computational complexity of each step is thesame, the AS-SAMP algorithm is less complex whencompared with the SAMP algorithm. An upper boundof the number of iterations of AS-SAMP is⌈K/s⌉K.Note that the upper bounds obtained for SAMP and AS-SAMP are quite loose, as the number of iterations whichvaries from a stage to another is likely to be equal to orsmaller thans for most of the stages. Thus, we presentan improved upper-bounded number of iterations for theAS-SAMP algorithm in Lemma 2, Appendix A and weshow that the upper-bounded number of iterations issmaller than that for the SAMP algorithm in Corollary 1,Appendix A. Moreover, according to [25], the numberof iterations which the OMP algorithm involves is atleast K, and therefore roughlyKPN operations areneeded. For CoSaMP, the number of operations is upperbounded byKPN [26]. Note that a larger number ofiterations are needed for OMP as only one coordinate

is identified during each iteration, and it lacks proofof theoretical guarantee of reconstruction quality, whileCoSaMP, SAMP exhibit low reconstruction complexityand offer theoretical guarantees of reconstruction quality[26], [27]. A summary of the computational complexityof the considered algorithms is provided in Table I.

IV. PROPOSEDPILOT PLACEMENT BASED ON

CONCATENATED CDS

A. Problem Statement

According to the CS theory, an accurate recovery ofa sparse signal relies on the RIP of the measurementmatrix. However, the RIP evaluation for a particularmatrix is a non-deterministic polynomial hard (NP)problem [18]. An alternative property which evaluates ifa measurement matrix can well preserve the informationof the sparse signal in the measurements is the mutualcoherence of the measurement matrix [16]–[18]. In (8),the measurement matrix is the product of the transmittedpilots and the DFT submatrix and is determined byboth the symbols on the pilot subcarriers and the setof pilot location indices, which is also referred to asthe pilot placement/arrangement. In this section, wefocus on the pilot placement by assuming that the samesymbol is transmitted on all the pilot subcarriers. Themutual coherence of aP × L measurement matrixAis defined as the maximum absolute correlation betweenany two normalized columns, which is

µ(A) = max1≤i,j≤L, i6=j

| < aaai · aaaj > |‖aaai‖ · ‖aaaj‖

. (9)

Given the equal-powered pilots, and substitutingA withXpDp, (9) becomes

µ(A) = max1≤i,j≤L, i6=j

|X(kc)|2| < dpi · dpj > |‖dpi‖ · ‖dpj‖

, (10)

where c = 1, 2, ...P and dpi denotes theith columnof Dp with the mth element given by 1√

Ne−j2πikm/N ,

m = 1, 2, ..., P . We aim to find the set of pilot locationindices Ω = k1, k2, ..., kP which minimizesµ(A).Several methods have been suggested to search the suit-able solutions iteratively [29], [32], [33]; however, thecomplexity of these methods is potentially high becausethe searching space grows rapidly (even exponentially)as the numbers of the total subcarriers and the pilotsubcarriers increase. In the following section, we proposea pilot placement scheme which aims to provide a near-optimal solution without suffering from the fast-growingcomplexity.

B. Proposed Pilot Placement based on the ConcatenatedCDS

Suppose that the measurement matrixA is composedof P rows of theN × L partial DFT matrixD in (7)and the indices set of the selected rows isΩ, and allthe pilot symbols are equal-powered. According to [32],

September 10, 2014 DRAFT

Page 35 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 37: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

6 IEEE TRANSACTIONS ON COMMUNICATIONS

TABLE ICOMPUTATIONAL COMPLEXITY.

Methods OMP CoSaMP SAMP‡ AS-SAMP‡

KPN ≤ KPN ≤ [−J log( |hmin|‖h‖ ) −1

log(CKs−SAMP) + J ]PN ≤ [−J log( |hmin|

‖h‖ ) −1log(CKs−ASSAMP

) + J ]PN

‡h is the target signal,hmin is the non-zero element with the minimum magnitude inh, andJ is the total number of the stages.CKs−SAMP

andCKs−ASSAMPare RIP-related parameters for SAMP and AS-SAMP, respectively (see Appendix A for a detailed explanation).

[33], [36], the pilot arrangements based on CDS areoptimal in minimizing the mutual coherence ofA (seeTheorem 3in Appendix C). However, the existence ofCDS is limited for some specific number of subcarriersand pilots. For the cases where there is no CDS, a pilotplacement scheme based on the concatenated CDS isproposed. To begin with, it is important to know thatfor an existing (P,N, λ) CDS of orderP ,1 every non-identity element in the setG of orderN has exactly thesame number of repetitions,λ, andG is the set of cyclicdifferences of any two elements of the CDS [37]. In otherwords, if we denote the number of repetitions of thedifferent elements ofG asλG = λg|g = 1, 2, ...N−1,thenλ1 = λ2 = ...λN−1 = λ which also means that thevariance ofλG is zero. Moreover, it is noticed that a pilotpattern with a smaller variance ofλG is likely to give asmaller mutual coherence of the resulting measurementmatrix and thus, more accurate estimates. Consider anOFDM system withN = 1024, in which P = 256identical pilot symbols are randomly scattered, and thenumber of taps of the sparse CIR isL = 250. In orderto quantize the channel estimation error, we adopt MSE,which is defined as

MSE = E[N∑

m=1

|H(m)− H(m)|2]. (11)

To show that as the variance ofλG increases, it is likelythat so does the mutual coherence ofA and the MSEof estimates, Spearman’s rank correlation is adopted tomeasure the strength of a monotonic relationship (i.e.,values of elements in a vector either increase or decreasewith every increase in an associated vector) betweenpaired vectors [38]. Table II shows the Spearman’srank correlation between any pair of the following fourvectors: the variance ofλG, the mutual coherenceµ(A),and the MSE for both the OMP and AS-SAMP algo-rithms, obtained based on104 pilot patterns;103 OFDMsymbols and10 dB signal-to-noise ratio (SNR) wereconsidered. Spearman’s rank correlation can take values

1An (P,N, λ) cyclic difference set is a subsetD =D(1), D(2), ...,D(P ) of the integers moduloN such that1, 2, ...,N − 1 can each be represented as a difference(D(i) −D(j)), i, j = 1, 2, ...P modulo N in exactly λ different ways. Forexample, the(3, 7, 1) CDS is1, 2, 4 modulo7.

from 1 to −1, with 1 (−1) indicating that two vectorscan be described using a monotonic increase (decrease)function, and0 meaning that there is no tendency forone vector to either increase (decrease) when the otherincreases [38].

It is noted that by concatenating a CDS,2 the varianceof the number of repetitions tends to be small. Fromthese observations, we propose a pilot placement schemebased on the concatenated CDS for pairs of(P,N)where CDS does not exist. First, a CDS needs to bechosen for concatenation according to the ratio of thenumber of the pilots to the number of subcarriers, i.e.,P/N . Specifically, we select the existing CDS withthe parameters(u, v, a) in which u/v is the closestto P/N . For instance, to select indices for 256 pilotsfrom 1024 positions, the (133,33,8) CDS is used. Afterconcatenating the selected CDS, we adopt an iterativeprocedure, which is similar to that in [32], to find the restof pilot positions which minimize the mutual coherenceof the resulting measurement matrix. We referred to thisas to the iterative tail search; the pseudocode for theproposed scheme is shown asAlgorithm 2 . It is worthnoting that the size of the search space is greatly reducedafter concatenation, and hence, the proposed methodconverges much faster when compared to the iterativemethods in [32], [33].

V. SIMULATION RESULTS

A. Simulation Set-up

As UWA channels are inherently sparse, we considerUWA channel estimation for a coded OFDM transmis-sion with N = 1024 subcarriers and bandwidth of9.8 kHz, leading to a subcarrier spacing of9.5 Hz.The CP duration equals to26 ms, which correspondsto the length of CPNCP = 256. Unless otherwisementioned, the number of pilotsP = 256 is assumed.The data symbols are drawn independently from a16-QAM constellation and are coded using a (1024, 512)

2A concatenated CDS is shown in the following example. For the(3, 7, 1) CDS, a concatenated CDS is obtained through1, 2, 4, (1×7 + 1), (1× 7 + 2), (1× 7 + 4), (2× 7 + 1), (2× 7 + 2), (2× 7 +4), ..., (i × 7 + 1), (i× 7 + 2), (i × 7 + 4), ..., wherei ∈ Z

+ andi ≥ 1.

DRAFT September 10, 2014

Page 36 of 44

IEEE Transactions on Communications

Under review for possible publication in

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960

Page 38: For Peer Revielicheng/TCOM-review.pdf · 2014. 9. 29. · Sparse channel estimation, compressed sensing/compressive sensing, sparsity adaptive matching pursuit, pilot placement, cyclic

For Peer Review

SUBMITTED PAPER 7

TABLE IISPEARMAN’ S RANK CORRELATIONS‡

var(λG)§ µ(A) The average MSE of OMP The average MSE of AS-SAMP

var(λG) 1 0.7475 0.7467 0.7527µ(A) 0.7475 1 0.7481 0.7534

The average MSE of OMP 0.7467 0.7481 1 —The average MSE of AS-SAMP 0.7527 0.7534 — 1

‡ For two vectors of size V,A = [A(1), A(2), ..., A(V )] andB = [B(1), B(2), ..., B(V )], Spearsman’s correlation is

calculated from∑

Vi=1

(ai−a)(bi−b)√∑

Vi=1

(ai−a)2∑

Vi=1

(bi−b)2, whereai and bi are the positions in the ascending order (ranks) ofA(i)

andB(i), respectively.a and b are the means ofai andbi, i = 1, 2, ..., V .§ The variance ofλG = 1

N−1

∑N−1g=1 (λg)

2 − ( 1N−1

∑N−1g=1 λg)

2.

Algorithm 2 Pilot Placement Based on Concatenated CDS with an Iterative Tail Search

Input: An existing (u, v, a) CDS C for concatenation; the total number of subcarriers N; the number of pilot subcarriers P; the partial DFT matrix D whose (m, n) element is $\frac{1}{\sqrt{N}}e^{-j2\pi mn/N}$, where 0 ≤ m ≤ N − 1, 0 ≤ n ≤ L − 1, and L is the number of taps of the CIR.
1: Initialize $\Omega_c^0 = \emptyset$, $\Omega_{temp} = \emptyset$
2: for i from 1 to ⌊N/v⌋ do
3:    $\Omega_c^i = \Omega_c^{i-1} \cup [C + (i-1)\times v]$
4: end for
5: $P_r = P - u \times \lfloor N/v \rfloor$, $\Omega = \Omega_c$
6: for j from 1 to $P_r$ do
7:    $\Omega_{temp} = \Omega$
8:    Form all candidate sets by adding one tail element to $\Omega_{temp}$: $\Omega = \Omega_{temp} \cup \{k\}$, $k \in \{v\lfloor N/v\rfloor + 1, \ldots, N\}\setminus\Omega_{temp}$
9:    Form the matrix A for each candidate set from the previous step by selecting the rows of D whose indices are in $\Omega$
10:   For all candidate A matrices generated in the previous step, calculate the corresponding mutual coherence, and keep the set with the minimum mutual coherence
11: end for
12: return $\Omega$
Output: The pilot indices set $\Omega$
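A compact sketch of this procedure is given below for illustration. It assumes 1-based pilot indices (as in the CDS example of footnote 2), restricts the tail-search candidates to the subcarriers beyond the last concatenated copy of the CDS (the reading consistent with the worked complexity example later in the paper), and uses the column-normalized mutual coherence of the selected rows of D; it is a sketch of the idea, not the authors' exact implementation.

```python
import numpy as np

def mutual_coherence(D, omega):
    """Column-normalized mutual coherence of the rows of D selected by omega (1-based)."""
    A = D[np.array(sorted(omega)) - 1]           # P x L measurement matrix
    A = A / np.linalg.norm(A, axis=0)            # normalize the L tap columns
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

def pilot_placement(cds, v, N, P, L):
    """Concatenate a (u, v, a) CDS, then greedily fill the tail (Algorithm 2 sketch)."""
    D = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N) / np.sqrt(N)
    copies = N // v
    omega = {c + i * v for i in range(copies) for c in cds}      # concatenated CDS
    tail = [k for k in range(v * copies + 1, N + 1) if k not in omega]
    for _ in range(P - len(omega)):                              # iterative tail search
        best = min(tail, key=lambda k: mutual_coherence(D, omega | {k}))
        omega.add(best)
        tail.remove(best)
    return sorted(omega)

# toy example on a small grid: pilot_placement([1, 2, 4], 7, N=52, P=24, L=12)
```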

We consider the channel model described in (6) with N_p = 15 multipaths, in which the inter-arrival times are exponentially distributed with a mean of 1 ms, i.e., E[τ_{j+1} − τ_j] = 1 ms, j ∈ {0, 1, ..., N_p − 1}. The amplitudes are Rayleigh distributed, with the average power decreasing exponentially with delay such that the power difference between the beginning and the end of the CP is 20 dB. These parameters are assumed to be constant within an OFDM symbol.

TABLE III
PARAMETERS OF COMPARED ALGORITHMS

Name     | MaxIter‡     | Sparsity K   | Step size s        | Tolerance ε   | Threshold Γ
OMP      | 20           | 15           | not required       | not required  | not required
CoSaMP   | 20           | 15           | not required       | norm(Noise)§  | not required
SAMP     | not required | not required | 1, 6, 8            | norm(Noise)   | not required
AS-SAMP  | not required | not required | initially 1, 6, 8  | norm(Noise)   | 1

‡ Maximum number of iterations.
§ norm(V) = $\sqrt{\sum|V|^2}$.

and the end of the CP is20 dB. These parameters areassumed to be constant within an OFDM symbol. Theparameters for the considered reconstruction algorithmsare given in Table III. Since there is a trade-off betweenthe initial step sizes and the reconstruction speed,three choices of thes value (s ≤ Np) are used thatcorrespond to a small, medium and large step size. Also,as the thresholdΓ depends on the distribution of themagnitude of the target signal, we set it equal to1 basedon empirical results. The MSE and BER are used tomeasure the channel estimation accuracy and the systemperformance, respectively. The CPU running time is usedto provide a rough estimation of the channel estimationcomputational complexity. Simulations are performed inMATLAB R2009a using a2.67 GHz Intel Corei7 CPUwith 4 GB of memory storage and104 Monte-Carlo trialsare employed for averaging the results. Performanceof the proposed reconstruction algorithm and the pilotplacement scheme are shown next.
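For completeness, one way to draw a CIR realization with the statistics just described is sketched below. The sampling rate, the mapping of path delays onto the N_CP = 256 tap grid, and the exact normalization are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_uwa_cir(n_paths=15, mean_gap_ms=1.0, decay_db=20.0,
                   fs_hz=9800.0, n_taps=256):
    """One sparse CIR: exponential inter-arrivals, Rayleigh amplitudes,
    average power decaying by decay_db dB across the CP (sketch only)."""
    gaps = rng.exponential(mean_gap_ms * 1e-3, n_paths)
    delays = np.cumsum(gaps) - gaps[0]                 # first path at delay 0
    cp_dur = n_taps / fs_hz                            # CP duration in seconds
    avg_pow = 10.0 ** (-(decay_db / 10.0) * delays / cp_dur)
    # Rayleigh magnitude <=> circularly-symmetric complex Gaussian tap
    amps = (rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)) * np.sqrt(avg_pow / 2)
    h = np.zeros(n_taps, dtype=complex)
    taps = np.minimum(np.round(delays * fs_hz).astype(int), n_taps - 1)
    np.add.at(h, taps, amps)                           # paths on the same tap add up
    return h
```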

B. Performance of the Proposed AS-SAMP Algorithm

First, we compare the proposed algorithm with two classic algorithms, namely least squares (LS) and OMP, using different numbers of randomly distributed pilots. Fig. 2 shows the MSE of these algorithms versus SNR. As the number of pilots increases, the MSE decreases for all algorithms. It is worth noting that OMP has, in general, a better MSE performance than LS for the same number of pilots. Similarly, AS-SAMP achieves a better MSE performance than the OMP algorithm. For example, at SNR = 15 dB and P = 64, the MSE values for the LS, OMP and AS-SAMP algorithms are 8 × 10⁻², 2 × 10⁻² and 6 × 10⁻³, respectively.


Fig. 2. MSE performance of the LS, OMP and AS-SAMP algorithms with various numbers of pilots (MSE versus SNR in dB; P = 64, 128, 256).

Fig. 3. MSE performance of the LS, OMP, CoSaMP, SAMP, and AS-SAMP algorithms (MSE versus SNR in dB; SAMP with s = 6, AS-SAMP with s_I = 6).

In other words, for the same level of MSE performance, the proposed algorithm uses fewer pilots than the other two algorithms.

Next, with a fixed number of pilots, Figs. 3, 4 and 5 plot the MSE, BER and computational complexity of all the algorithms, respectively. According to the results in Figs. 3 and 4, the CS-based channel estimators give better MSE and BER performance than the conventional LS estimator. Moreover, the channel estimators based on the SAMP and AS-SAMP algorithms outperform those based on the OMP and CoSaMP algorithms, in the sense that the former algorithms offer the same performance while using a smaller number of pilots. As shown in Fig. 5, the complexity of the OMP algorithm is higher than that of the other algorithms; this can be easily explained, as only one coordinate is added during each iteration.
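For reference, the OMP loop mentioned above (one coordinate added per iteration, followed by a least-squares re-estimation) can be sketched as follows; this is the textbook procedure of [25] with the stopping parameters of Table III, not the authors' specific implementation.

```python
import numpy as np

def omp(A, y, max_iter=20, tol=1e-6):
    """Orthogonal matching pursuit for Y_p = A h (sketch of [25])."""
    P, N = A.shape
    support, residual = [], y.copy()
    x_s = np.zeros(0)
    for _ in range(max_iter):
        # add the single coordinate most correlated with the residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares re-estimation on the enlarged support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < tol:
            break
    h_hat = np.zeros(N, dtype=complex)
    h_hat[support] = x_s
    return h_hat
```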

Fig. 4. BER performance of the coded LS, OMP, CoSaMP, SAMP, and AS-SAMP algorithms (BER versus SNR in dB; SAMP with s = 6, AS-SAMP with s_I = 6).

Fig. 5. Running time (s) versus SNR (dB) of the OMP, CoSaMP, SAMP, and AS-SAMP algorithms (SAMP with s = 6, AS-SAMP with s_I = 6).

Similarly, the complexity of the AS-SAMP algorithm is higher than that of SAMP; this can be explained by the fact that, for the same initial step size, the step size is reduced during the fine-tuning stages of AS-SAMP.

Figs. 6 and 7 depict the MSE performance and the computational complexity of the AS-SAMP and SAMP algorithms with different step sizes, respectively. As can be seen, for a medium or large initial step size (s = s_I = 6 or s = s_I = 8), AS-SAMP outperforms SAMP with a small increase in complexity, while for s = s_I = 1 the same performance is achieved with a slightly larger CPU running time for AS-SAMP. This can be easily explained, as when s = s_I = 1, AS-SAMP becomes equivalent to SAMP except for an additional criterion for changing stages. Note that SAMP and


Fig. 6. MSE performance of the SAMP and AS-SAMP algorithms with different step sizes (MSE versus SNR in dB; s, s_I ∈ {1, 6, 8}).

Fig. 7. Running time (s) versus SNR (dB) of the SAMP and AS-SAMP algorithms with different step sizes (s, s_I ∈ {1, 6, 8}).

AS-SAMP require s ≤ N_p and s_I ≤ N_p, respectively, to avoid overestimation. In general, the proposed algorithm is more accurate without significantly increasing the complexity of the estimation.
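To make the step-size discussion concrete, a highly simplified stagewise recovery loop in the spirit of SAMP [27] is sketched below. The stage-switching rule (residual no longer decreasing) follows [27]; the step-halving line is only one plausible way to realize an adaptive step size and is not claimed to be the exact AS-SAMP rule, which is specified earlier in the paper.

```python
import numpy as np

def stagewise_recover(A, y, step, max_size, tol, max_iter=200):
    """Stagewise greedy recovery in the spirit of SAMP [27] (illustrative sketch).

    The step-halving at a stage change is an assumed adaptation rule, used here
    only to illustrate why a large initial step with later refinement can reduce
    the number of iterations.
    """
    P, N = A.shape
    support = np.array([], dtype=int)
    residual = y.copy()
    size = step                                   # current estimated sparsity level
    for _ in range(max_iter):
        corr = np.abs(A.conj().T @ residual)      # preliminary correlation test
        candidate = np.union1d(support, np.argsort(corr)[-size:])
        x_c, *_ = np.linalg.lstsq(A[:, candidate], y, rcond=None)
        keep = candidate[np.argsort(np.abs(x_c))[-size:]]      # final test
        x_k, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        new_residual = y - A[:, keep] @ x_k
        if np.linalg.norm(new_residual) < tol:                 # converged
            support = keep
            break
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            step = max(1, step // 2)              # assumed rule: shrink the step
            size = min(size + step, max_size)     # move to the next stage
        else:
            support, residual = keep, new_residual
    h_hat = np.zeros(N, dtype=complex)
    x_f, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    h_hat[support] = x_f
    return h_hat
```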

C. Performance of the Proposed Pilot Placement Scheme

Here, we consider three pilot placement schemes, i.e., random placement, the procedure in [32], and our proposed scheme. Equal-powered pilots are assumed for all scenarios. With the random scheme, the pilots are selected randomly among all subcarriers, and 10⁴ trials are generated for averaging the results. With our proposed scheme, the pilots are arranged based on the concatenated CDS with an iterative tail search, referred to as C-CDS with TS. Because the selection of the existing CDS depends on

Fig. 8. MSE performance of the OMP and AS-SAMP algorithms with random and the proposed pilot placement, for P = 64 (random placement: µ = 0.31; C-CDS with TS: µ = 0.27).

Fig. 9. MSE performance of the OMP and AS-SAMP algorithms for different pilot placements (random, procedure in [32], and C-CDS with TS), for P = 128 and 256. Solid lines are used for OMP and dashed lines for AS-SAMP.

the ratio P/N, the (273, 17, 1), (73, 9, 1) and (133, 33, 8) CDS are chosen for the cases of P = 64, P = 128 and P = 256, respectively.

Fig. 8 shows the MSE performance of the OMP and AS-SAMP algorithms with the random and the proposed pilot placement schemes, given P = 64. For the randomly placed pilots, we use error bars to indicate the standard deviation of the MSE, which was calculated based on 10⁴ indices sets. It can be seen that the proposed method provides a superior channel estimation performance when compared to the random placements, as a reduced mutual coherence µ is obtained. For instance, at SNR = 16 dB, the average MSEs of the random scheme for OMP and AS-SAMP are 1.6 × 10⁻² and 3 × 10⁻³, respectively, while the MSEs of the proposed scheme for OMP and AS-SAMP are 1.2 × 10⁻² and 2 × 10⁻³, respectively.


Fig. 10. BER performance of the OMP and AS-SAMP algorithms for different pilot placements (random, procedure in [32], and C-CDS with TS), for P = 256. Solid lines are used for OMP and dashed lines for AS-SAMP.

It should be noted that, for each SNR, the MSE obtained with the proposed method is smaller than the mean of the MSEs minus the standard deviation obtained with the randomly placed pilots; more specifically, it is approximately equal to the mean minus three times the standard deviation. As such, the proposed method provides a better MSE performance than nearly all random pilot arrangements. Also, AS-SAMP achieves a better MSE performance than OMP for both pilot placement schemes.

Fig. 9 shows the MSE of the OMP and AS-SAMP algorithms for the three previously mentioned pilot placement schemes, for P = 128 and 256. Among them, the proposed C-CDS with TS has a slightly better performance. Moreover, since the C-CDS pilot arrangement is deterministic and the iterative search is conducted only for the tail, the search space is greatly reduced. Therefore, the number of iterations of the proposed method, which is proportional to its computational complexity, is significantly lower than that of the procedure in [32]. An example is provided as follows. When N = 1024 and P = 256, the procedure in [32] requires (2N − P + 1)P/2 = 229,504 iterations. For our proposed scheme, assuming that the (133, 33, 8) CDS is used for concatenation, there are 1024 − 133 × ⌊1024/133⌋ = 93 subcarriers at the tail. To search for the remaining 25 (= 256 − 33 × ⌊1024/133⌋) pilot indices which minimize µ, (2 × 93 − 25 + 1) × 25/2 = 2025 iterations are required. It is also worth noting that AS-SAMP provides a comparable or even lower MSE using fewer pilots than OMP for a given SNR. For example, at SNR = 16 dB, the MSE of AS-SAMP for P = 128 is lower than that of OMP for P = 256 for all considered pilot placement schemes.
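The two iteration counts quoted above can be checked with a few lines (the CDS with 33 elements and cycle length 133 is assumed, as in the example):

```python
N, P = 1024, 256
full_search = (2 * N - P + 1) * P // 2            # procedure in [32]
u, v = 33, 133                                     # CDS size and cycle length
tail = N - v * (N // v)                            # 93 tail subcarriers
P_r = P - u * (N // v)                             # 25 pilots left for the tail search
tail_search = (2 * tail - P_r + 1) * P_r // 2      # proposed C-CDS with TS
print(full_search, tail_search)                    # 229504 2025
```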

Fig. 11. BER performance of the OMP, CoSaMP, SAMP, and AS-SAMP algorithms with random and the proposed pilot placement, for P = 256. Solid lines are used for OMP, dotted lines for CoSaMP, dash-dotted lines for SAMP, and dashed lines for AS-SAMP.

Finally, we compare the BER performance of the different pilot placement schemes, with the results shown in Figs. 10 and 11. In Fig. 10, the OMP and AS-SAMP algorithms are considered. Clearly, AS-SAMP provides a better BER performance, and the proposed C-CDS with TS is slightly better than the other pilot schemes; this is consistent with the MSE comparison in Fig. 9. Fig. 11 compares the BER performance of the OMP, CoSaMP, SAMP and AS-SAMP algorithms with the random and the proposed pilot arrangements. In general, AS-SAMP with the proposed pilot allocation scheme provides the best BER performance among all the estimation algorithms and considered pilot placement schemes.

VI. CONCLUSION

In this paper, we have proposed an adaptive step size SAMP algorithm, AS-SAMP, together with an efficient near-optimal pilot placement scheme, for sparse channel estimation in OFDM systems. The proposed reconstruction algorithm features an adaptive step size adjustment strategy and possesses the advantage of not requiring a priori knowledge of the sparsity of the channel. It is shown through performance analysis that the proposed algorithm can significantly improve the estimation accuracy without introducing significant additional complexity. In order to ensure a satisfactory estimation, we have further proposed a near-optimal pilot placement scheme, which is based on the concatenated CDS with an iterative tail search. Because the search space of the proposed method is significantly reduced, its complexity is much lower than that of the iterative procedures in the literature. Monte Carlo simulations show that the proposed AS-SAMP with the new pilot placement scheme provides


a better MSE performance for the channel estimate, as well as a better system BER, without significantly increasing the computational complexity, and thus offers a better tradeoff between complexity and performance.

APPENDIX A
RECONSTRUCTION PERFORMANCE OF AS-SAMP

The recovery performance of the proposed AS-SAMP algorithm builds on the theoretical performance of SAMP and subspace pursuit (SP) [39]; therefore, the proofs, which follow the format in [27], [39], are developed for two cases: exact recovery from noiseless measurements and approximate recovery from noisy measurements. Before stating Theorem 1 for the exact recovery of the AS-SAMP algorithm, we need the two results summarized in the lemmas below.

Lemma 1: Given an arbitrary K-sparse signal h and the corresponding measurement $Y_p = Ah$, let the total number of stages decided by AS-SAMP be J, and let $s_i$, i ∈ {1, 2, ..., J}, be the step size of the i-th stage. If A satisfies the RIP with parameter $\delta_{3K_J} < 0.06$ [17], where $K_J = \sum_{i=1}^{J} s_i$ is the estimated sparsity level, then the last stage of AS-SAMP is equivalent to the SAMP algorithm with estimated sparsity $K_J$, except for possibly different contents of the final support set and the observation residual vector.

Proof: During the last stage of AS-SAMP, the final support set has size $K_J$. Given the same size of the final support set, both algorithms use the same preliminary and final correlation tests, which return the $K_J$ indices corresponding to the largest absolute values of $|A^{\dagger}_{C_t} Y_p|$. The only differences are in the content of the final support set and the observation residual vector.

Lemma 2: AS-SAMP guarantees the convergence of the recovery process. The number of iterations that AS-SAMP involves is upper-bounded by
$$-\log\!\left(\frac{|h_{\min}|}{\|h\|}\right)\left(\frac{-1}{\log(C_{K_1})}+\cdots+\frac{-1}{\log(C_{K_J})}\right)+J, \qquad (12)$$
where $h_{\min}$ is the non-zero element with the minimum magnitude, $C_{K_i} = \frac{2\delta_{3K_i}(1+\delta_{3K_i})}{(1-\delta_{3K_i})^3}$, i = 1, 2, ..., J, $\delta_{3K_i}$ is the RIP parameter in the i-th stage, and $K_i$ is the size of the final support set in the i-th stage.

Proof: Lemma 2.1 is introduced first, as it serves as a foundation for the proof of Lemma 2.

Lemma 2.1: The energy difference between the signal captured by the final support set of the current iteration and that captured by the final support set of the previous iteration, i.e., $\|h_{F_t}\|^2 - \|h_{F_{t-1}}\|^2$, decreases as the number of iterations increases, before the estimated sparsity reaches the true sparsity.

The proof of Lemma 2.1 is postponed to Appendix B. Similar to SAMP, AS-SAMP takes a finite number of iterations to approach the sparse estimate. If the algorithm fell into an infinite loop within a certain stage, the final support set would repeat, which contradicts the fact that

the energy difference decreases monotonically. Intuitively, AS-SAMP reaches the final estimate with the same estimated sparsity level faster than SAMP, because the most significant entries are reconstructed by selecting a larger number of coordinates into the support set during the initial stages. Let the number of iterations required in the i-th stage by the proposed algorithm be $n^{it}_i$, i = 1, 2, ..., J. According to Theorem 6 in [39], for each iteration in a particular stage, both SAMP and AS-SAMP involve two correlation maximization tests, and the property below holds:

$$n^{it}_1 \le \frac{-\log\!\big(\frac{|h_{\min}|}{\|h\|}\big)}{-\log(C_{K_1})}+1, \;\; \ldots, \;\; n^{it}_J \le \frac{-\log\!\big(\frac{|h_{\min}|}{\|h\|}\big)}{-\log(C_{K_J})}+1. \qquad (13)$$

Let the total number of iterations required be $n_{total}$; then
$$n_{total} = n^{it}_1 + \cdots + n^{it}_J \le -\log\!\left(\frac{|h_{\min}|}{\|h\|}\right)\left(\frac{-1}{\log(C_{K_1})}+\cdots+\frac{-1}{\log(C_{K_J})}\right)+J. \qquad (14)$$

Moreover, the upper bound on the number of iterations for AS-SAMP is compared with that for SAMP in the corollary below.

Corollary 1: Provided that A satisfies the RIP with parameters $\delta_{3K_{s-ASSAMP}} < 0.06$ and $\delta_{3K_{s-SAMP}} < 0.06$, where $K_{s-ASSAMP}$ and $K_{s-SAMP}$ are the estimated sparsity levels for AS-SAMP and SAMP, respectively, the upper bound on the number of iterations for AS-SAMP is smaller than that for SAMP.

The proof of Corollary 1 is deferred to Appendix B. Based on the lemmas above, a sufficient condition for exact reconstruction is given in the following theorem.

Theorem 1 (Exact recovery from noiseless measurements): Let $K_{s-ASSAMP} = s_I J$, where $s_I = s_1$ is the initial step size and J is the total number of stages of the AS-SAMP algorithm. If the sensing matrix A satisfies the RIP with parameter $\delta_{3K_{s-ASSAMP}} < 0.06$, the AS-SAMP algorithm is guaranteed to exactly recover h from $Y_p$ within a finite number of iterations.

Proof: Based on Lemma 1 and Lemma 2, when the RIP condition is satisfied, the last stage is equivalent to SAMP with estimated sparsity $K_{s-ASSAMP}$; hence, the proposed algorithm guarantees exact recovery of the target signal after this stage, and it takes a finite number of iterations to reach $K_{s-ASSAMP}$.

Remark: From Lemma 1, a sufficient condition on A to guarantee exact recovery is $\delta_{3K_J} < 0.06$, where $K_J = \sum_{i=1}^{J} s_i$ and $s_i$ is the step size in the i-th stage. As $s_I \ge s_2 \ge \cdots \ge s_J$, we have $K_J \le s_I J$. Therefore, a more restrictive requirement on the RIP parameter of A is


$\delta_{3 s_I J} < 0.06$, which is $\delta_{3K_{s-ASSAMP}} < 0.06$. The sufficient condition for SAMP is more restrictive than that for the SP algorithm, as the estimated sparsity level $K_{s-SAMP} = s\lceil K/s\rceil$, where s is the fixed step size in SAMP, is always larger than the true sparsity K [27]. Similarly, to compare the restrictiveness of the condition for AS-SAMP, the values of $K_{s-SAMP}$, $K_{s-ASSAMP}$ and K need to be compared. As $\lceil K/J\rceil \le s_I \le \frac{s\lceil K/s\rceil}{J}$, we have $K \le s_I J \le s\lceil K/s\rceil$, and thus $K \le K_{s-ASSAMP} \le K_{s-SAMP}$. Furthermore, because of the monotonicity of $\delta_{3K}$, $\delta_{3K_{s-ASSAMP}} \le \delta_{3K_{s-SAMP}}$; as a result, if $\delta_{3K_{s-SAMP}} < 0.06$, then $\delta_{3K_{s-ASSAMP}} < 0.06$ holds, which means that the requirement on A for AS-SAMP is less restrictive than that for SAMP. Moreover, as A is a P × N partial DFT matrix in our application, and the indices of the P pilots are randomly chosen, A satisfies the RIP with overwhelming probability provided that

$$K \le C_1 \frac{P}{(\log N)^6}, \qquad (15)$$
where $C_1$ depends only on the RIP parameter (by overwhelming probability, we mean that the probability is at least $1 - N^{-1/C_1}$) [16], and K is the sparsity of the target CIR. In fact, (15) expresses the minimum number of pilots ($P \ge \frac{K(\log N)^6}{C_1}$) required such that a random subset of A with average cardinality $3K_{s-ASSAMP}$ satisfies the RIP with high probability. Specifically, for P ≥ 8K, the recovery rate is above 90% [16].

The second part investigates the approximate recovery of the proposed algorithm from inaccurate measurements. Two types of inaccurate measurements are considered: one is subject to noise perturbation, while the other corresponds to an approximately sparse signal, whose non-significant elements are comparatively small (but not zero), observed in noise.

Theorem 2 (Approximate recovery from noisy measurements): Consider a K-sparse signal $h \in \mathbb{R}^N$, the noisy measurement vector $Y_p = Ah + W_p \in \mathbb{R}^P$, and a noise vector $W_p$ generated from a Gaussian distribution with zero mean and variance $\sigma^2$. If the measurement matrix A satisfies the RIP with parameter $\delta_{3K_{s-ASSAMP}} < 0.03$, the signal approximation $\hat{h}$ satisfies
$$\|h - \hat{h}\| \le \frac{1+\delta_{3K_{s-ASSAMP}}}{\delta_{3K_{s-ASSAMP}}\,(1-\delta_{3K_{s-ASSAMP}})}\,\|W_p\|. \qquad (16)$$

Corollary 2 (Approximate recovery from signal and noise perturbations): Consider a compressible K-sparse signal $h \in \mathbb{R}^N$, and let $h_K$ represent its K most significant entries. The signal h is compressibly sparse if $h - h_K \ne 0$. Under the same assumptions as in Theorem 2, if A satisfies the RIP with parameter $\delta_{6K_{s-ASSAMP}} < 0.03$, the reconstruction distortion of the AS-SAMP algorithm is given by
$$\|h - \hat{h}\| \le \frac{1+\delta_{6K_{s-ASSAMP}}}{\delta_{6K_{s-ASSAMP}}\,(1-\delta_{6K_{s-ASSAMP}})}\left(\sigma + \frac{1+\delta_{6K_{s-ASSAMP}}}{\sqrt{K}}\,\|h - h_K\|_1\right). \qquad (17)$$

The proofs of Theorem 2 and Corollary 2 are similar to those of the corresponding theorem and corollary in [39], as AS-SAMP is equivalent, at its last stage, to SAMP with the estimated sparsity $K_{s-ASSAMP}$, except for the different contents of the candidate and final support sets, which do not affect the stability of AS-SAMP under signal and noise perturbations.

APPENDIX B
PROOF OF LEMMA 2.1 AND COROLLARY 1

A. Proof of Lemma 2.1

The proof is derived from Theorem 2 in [39], because both the preliminary and final tests are correlation maximization tests.

Proof: Assume that the sensing matrix A satisfies the RIP with parameter $\delta_{3K_J} < 0.06$. Then
$$\|h_{\bar{F}_t}\|^2 \le C^2_{K_i}\,\|h_{\bar{F}_{t-1}}\|^2 \le C^2_{K_J}\,\|h_{\bar{F}_{t-1}}\|^2 = \zeta\,\|h_{\bar{F}_{t-1}}\|^2, \qquad (18)$$
where $h_{\bar{F}_t}$ denotes the part of the reconstructed signal not captured by $F_t$ after the t-th iteration. (18) is based on $0 < C_{K_1} \le C_{K_2} \le \cdots \le C_{K_J} < 1$, and therefore $\zeta = C^2_{K_J} < 1$. Thus, the following derivation holds:
$$\|h_{\bar{F}_1}\|^2 - \zeta\|h\|^2 \le 0 \le \|h_{\bar{F}_2}\|^2, \qquad 0 \le \|h_{\bar{F}_1}\|^2 - \|h_{\bar{F}_2}\|^2 \le \zeta\|h\|^2, \qquad (19)$$
where $h = h_{\bar{F}_0}$. As $\|h_{\bar{F}_1}\|^2 = \|h\|^2 - \|h_{F_1}\|^2$ and $\|h_{\bar{F}_2}\|^2 = \|h\|^2 - \|h_{F_2}\|^2$, (19) can be written as
$$0 \le (\|h\|^2 - \|h_{F_1}\|^2) - (\|h\|^2 - \|h_{F_2}\|^2) \le \zeta\|h\|^2, \qquad 0 \le \|h_{F_2}\|^2 - \|h_{F_1}\|^2 \le \zeta\|h\|^2. \qquad (20)$$
Similarly, we have
$$0 \le \|h_{F_3}\|^2 - \|h_{F_2}\|^2 \le \zeta^2\|h\|^2, \quad 0 \le \|h_{F_4}\|^2 - \|h_{F_3}\|^2 \le \zeta^3\|h\|^2, \;\cdots,\; 0 \le \|h_{F_t}\|^2 - \|h_{F_{t-1}}\|^2 \le \zeta^{t-1}\|h\|^2. \qquad (21)$$
As $1 > \zeta > \zeta^2 > \zeta^3 > \zeta^4 \cdots$, the energy difference between two consecutive iterations converges to a small positive value, which is related to the RIP parameter.

B. Proof of Corollary 1

Proof: Since both the SAMP and AS-SAMP algorithms use the preliminary and final tests, the upper bound on the number of iterations in Lemma 2 can also be applied to the SAMP algorithm. Considering the same target signal for both algorithms, according to Lemma 2 we have
$$n_{total} \le -\log\!\left(\frac{|h_{\min}|}{\|h\|}\right)\left(\frac{-1}{\log(C_{K_1})}+\cdots+\frac{-1}{\log(C_{K_J})}\right)+J.$$

As $0 < \delta_{3K_1} \le \delta_{3K_2} \le \cdots \le \delta_{3K_J} < 0.06$, it follows that $0 < \frac{-1}{\log(C_{K_1})} \le \frac{-1}{\log(C_{K_2})} \le \cdots \le \frac{-1}{\log(C_{K_J})}$. Thus, $\frac{-1}{\log(C_{K_1})} + \frac{-1}{\log(C_{K_2})} + \cdots + \frac{-1}{\log(C_{K_J})} \le \frac{-J}{\log(C_{K_J})}$, and therefore $n_{total} \le \frac{-J\log\big(\frac{|h_{\min}|}{\|h\|}\big)}{-\log(C_{K_J})} + J$. For the same target signal and total number of stages, the upper bound only depends on $C_{K_J}$. According to



Appendix A, $0 < \delta_{3K_{s-ASSAMP}} \le \delta_{3K_{s-SAMP}} < 0.06$, and therefore $C_{K_{s-ASSAMP}} \le C_{K_{s-SAMP}}$. Clearly, $0 < \frac{-1}{\log(C_{K_{s-ASSAMP}})} \le \frac{-1}{\log(C_{K_{s-SAMP}})}$ and $\frac{-J\log(\frac{|h_{\min}|}{\|h\|})}{-\log(C_{K_{s-ASSAMP}})} + J \le \frac{-J\log(\frac{|h_{\min}|}{\|h\|})}{-\log(C_{K_{s-SAMP}})} + J$. For example, let $\delta_{3K_{s-ASSAMP}} = 0.01$, which leads to $\log(C_{K_{s-ASSAMP}}) = -5.59$, and let the total number of stages be 5. Supposing $\log(\frac{|h_{\min}|}{\|h\|}) = -7$, the upper bound on the number of iterations that the proposed algorithm involves is 11. On the other hand, with the same h and J, supposing $\delta_{3K_{s-SAMP}} = 0.05$, the upper bound on the number of iterations for the SAMP algorithm is 16.

APPENDIX C
PROOF OF THEOREM 3

Theorem 3: Assume that the measurement matrix A is composed of P rows of D, where the (m, n) element of D is given by $[D]_{m,n} = \frac{1}{\sqrt{N}}e^{-j2\pi mn/N}$ (0 ≤ m ≤ N − 1 and 0 ≤ n ≤ L − 1), and the indices set of the selected rows is Ω. All the pilot symbols are equal-powered, with $E_P = |X(k_c)|^2$, c = 1, 2, ..., P. If Ω is a CDS with parameters (P, N, λ), then the mutual coherence of the resulting measurement matrix, µ(A), is minimized. In particular, $\lambda = \frac{P^2-P}{N-1}$ and the minimized value of the mutual coherence is $E_P\sqrt{\frac{NP-P^2}{N-1}}$.

This is a refined version of the proof presented in [32] and [33], such that the proof is applicable in the general case.

Proof: Recall from (10) that, because $|\langle d_{p_i}, d_{p_j}\rangle|$ only depends on ∆ = i − j and $\|d_{p_i}\| = \|d_{p_j}\| = 1$, designing the optimal pilot pattern can be formulated as
$$\Omega_{opt} = \arg\min_{\Omega}\max_{1\le\Delta\le L-1} E_P\,|\langle d_{p_i}, d_{p_{i+\Delta}}\rangle| = \arg\min_{\Omega}\max_{1\le\Delta\le L-1} E_P\,\Big|\sum_{r=1}^{P}\omega^{p_r\Delta}\Big|, \qquad (22)$$

where $\omega = e^{-j\frac{2\pi}{N}}$. Maximizing $\big|\sum_{r=1}^{P}\omega^{p_r\Delta}\big|$ is equivalent to maximizing $\big|\sum_{r=1}^{P}\omega^{p_r\Delta}\big|^2 = \sum_{m=1}^{P}\omega^{p_m\Delta}\cdot\sum_{n=1}^{P}\omega^{-p_n\Delta}$, and $E_P$ is a constant; therefore, (22) can be re-written as

$$\Omega_{opt} = \arg\min_{\Omega}\max_{1\le\Delta\le L-1} \sum_{m=1}^{P}\sum_{n=1}^{P}\omega^{(p_m-p_n)\Delta}. \qquad (23)$$

It is worth noting that $\sum_{m=1}^{P}\sum_{n=1}^{P}\omega^{(p_m-p_n)\Delta}$ in the above equation is, in general, a complex number. Thus, to make the formulation applicable in the general case, it is revised as follows:

$$\Omega_{opt} = \arg\min_{\Omega}\max_{1\le\Delta\le L-1}\Big(\underbrace{P}_{m=n} + \underbrace{\sum_{m=1}^{P}\sum_{n=1}^{P}\mathrm{Re}\big[\omega^{|p_m-p_n|\Delta}\big]}_{m\ne n}\Big) = \arg\min_{\Omega}\max_{1\le\Delta\le L-1}\Big(\underbrace{P}_{m=n} + \underbrace{\sum_{m=1}^{P}\sum_{n=1}^{P}\cos\big(\tfrac{2\pi}{N}|p_m-p_n|\Delta\big)}_{m\ne n}\Big). \qquad (24)$$

In the above equation, the mutual coherence of the resulting matrix depends not only on the spacing between two columns but also on the spacing between two pilots. Define the multiset $G = \{(p_m - p_n) \bmod N \,|\, 1 \le m, n \le P,\, m \ne n\}$, whose elements take the values g = 1, 2, ..., N − 1, with each value g repeated $\lambda_g$ times. Then (24) can be re-written as

$$\Omega_{opt} = \arg\min_{\Omega}\max_{1\le\Delta\le L-1} E_P\Big(\underbrace{P}_{m=n} + \underbrace{\sum_{g=1}^{N-1}\lambda_g\cos\big(\tfrac{2\pi}{N}g\Delta\big)}_{m\ne n}\Big). \qquad (25)$$

The problem in the above equation is equivalent to finding the optimal pilot pattern which minimizes the maximum value of $\sum_{g=1}^{N-1}\lambda_g\cos(\frac{2\pi}{N}g\Delta)$, 1 ≤ ∆ ≤ L − 1. To show that the patterns based on a CDS are optimal, $\lambda_1 = \lambda_2 = \cdots = \lambda_{N-1}$ needs to be satisfied. Because

$$\max_{1\le\Delta\le L-1}\sum_{g=1}^{N-1}\lambda_g\cos\big(\tfrac{2\pi}{N}g\Delta\big) \ge \frac{\sum_{\Delta=1}^{L-1}\big(\sum_{g=1}^{N-1}\lambda_g\cos(\tfrac{2\pi}{N}g\Delta)\big)}{L-1}, \qquad (26)$$
equality holds when $\lambda_1 = \lambda_2 = \cdots = \lambda_{N-1} = \frac{P^2-P}{N-1}$, and the minimum value of the mutual coherence is

$$\mu(A)_{\min} = E_P\sqrt{P + \frac{P^2-P}{N-1}\cdot\frac{\sum_{\Delta=1}^{L-1}\big(\sum_{g=1}^{N-1}\cos(\tfrac{2\pi}{N}g\Delta)\big)}{L-1}} = E_P\sqrt{\frac{PN-P^2}{N-1}}. \qquad (27)$$
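As a quick numerical illustration of Theorem 3 (using the (3, 7, 1) CDS {1, 2, 4} with N = 7, unit pilot power, and an exhaustive ∆-range for simplicity), the cross-correlation magnitude $|\sum_r \omega^{p_r\Delta}|$ equals $\sqrt{(PN-P^2)/(N-1)} = \sqrt{2}$ for every ∆ when the pilots form a CDS, whereas a contiguous pilot block yields a larger maximum:

```python
import numpy as np

def max_crosscorr(pilots, N):
    """max over Delta of |sum_r omega^(p_r * Delta)| with omega = exp(-j*2*pi/N)."""
    p = np.asarray(pilots)
    return max(abs(np.exp(-2j * np.pi * p * d / N).sum()) for d in range(1, N))

N = 7
print(max_crosscorr([1, 2, 4], N))   # CDS placement:        ~1.414 = sqrt(2)
print(max_crosscorr([1, 2, 3], N))   # contiguous placement: ~2.247 (larger coherence)
```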

REFERENCES

[1] L. Hanzo, Y. Akhtman, L. Wang, and M. Jiang, MIMO-OFDM for LTE, WiFi and WiMAX: Coherent versus Non-Coherent and Cooperative Turbo-Transceivers. John Wiley and IEEE Press, Dec. 2011.
[2] M. Sablatash, "Transmission of all-digital television: state of the art and future directions," IEEE Trans. Broadcast., vol. 40, no. 2, pp. 102–121, Jun. 1994.
[3] B. Li, S. Zhou, M. Stojanovic, L. Freitag, and P. Willett, "Multicarrier communication over underwater acoustic channels with nonuniform Doppler shifts," IEEE J. Oceanic Eng., vol. 33, pp. 198–209, Apr. 2008.
[4] S. Zhou and Z. Wang, OFDM for Underwater Acoustic Communications. John Wiley & Sons, Jun. 2014.


[5] H. Arslan and G. Bottomley, "Channel estimation in narrowband wireless communication systems," Wireless Commun. and Mobile Comput., vol. 1, no. 2, pp. 201–219, Apr. 2001.
[6] S. Coleri, M. Ergen, A. Puri, and A. Bahai, "Channel estimation techniques based on pilot arrangement in OFDM systems," IEEE Trans. Broadcast., vol. 48, no. 3, pp. 223–229, Sep. 2002.
[7] J. van de Beek, O. Edfors, M. Sandell, S. Wilson, and P. Borjesson, "On channel estimation in OFDM systems," in Proc. IEEE 45th Veh. Technol. Conf., Chicago, IL, Jul. 1995, pp. 815–819.
[8] W. Bajwa, J. Haupt, A. Sayeed, and R. Nowak, "Compressed channel sensing: A new approach to estimating sparse multipath channels," Proc. IEEE, vol. 98, no. 6, pp. 1058–1076, Jun. 2010.
[9] I. Fevrier, S. Gelfand, and M. Fitz, "Reduced complexity decision feedback equalization for multipath channels with large delay spreads," IEEE Trans. Commun., vol. 47, no. 6, pp. 927–937, Jun. 1999.
[10] S. Cotter and B. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Trans. Commun., vol. 50, no. 3, pp. 374–377, Mar. 2002.
[11] M. Stojanovic, "Retrofocusing techniques for high rate acoustic communications," J. Acoust. Soc. Am., vol. 117, no. 3, pp. 1173–1185, Mar. 2005.
[12] S. Ariyavisitakul, N. Sollenberger, and L. Greenstein, "Tap-selectable decision-feedback equalization," IEEE Trans. Commun., vol. 45, no. 12, pp. 1497–1500, Dec. 1997.
[13] G. Gui, W. Peng, and F. Adachi, "Improved adaptive sparse channel estimation based on the least mean square algorithm," in Proc. IEEE Wireless Commun. and Networking Conf. (WCNC'13), Shanghai, China, Apr. 2013, pp. 3105–3109.
[14] C. Carbonelli, S. Vedantam, and U. Mitra, "Sparse channel estimation with zero tap detection," IEEE Trans. Commun., vol. 6, no. 5, pp. 1743–1763, May 2007.
[15] C. Berger, S. Zhou, J. Preisig, and P. Willett, "Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing," IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1708–1721, Mar. 2010.
[16] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, pp. 489–509, Feb. 2006.
[17] E. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies," IEEE Trans. Inf. Theory, vol. 52, pp. 5406–5425, Dec. 2006.
[18] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[19] R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag., vol. 24, no. 4, pp. 118–121, Jul. 2007.
[20] "Compressive sensing resources," Houston, TX, http://www.dsp.ece.rice.edu/cs/.
[21] G. Taubock, F. Hlawatsch, D. Eiwen, and H. Rauhut, "Compressive estimation of doubly selective channels in multicarrier systems: Leakage effects and sparsity-enhancing processing," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 255–271, Apr. 2010.
[22] C. Berger, Z. Wang, J. Huang, and S. Zhou, "Application of compressive sensing to sparse channel estimation," IEEE Commun. Mag., vol. 48, no. 11, pp. 164–174, Nov. 2010.
[23] H. Die, W. Xiaodong, and H. Lianghua, "A new sparse channel estimation and tracking method for time-varying OFDM systems," IEEE Trans. Veh. Technol., vol. 62, no. 9, pp. 4648–4653, Nov. 2013.
[24] R. Prasad, C. R. Murthy, and B. D. Rao, "Joint approximately sparse channel estimation and data detection in OFDM systems using sparse Bayesian learning," IEEE Trans. Signal Process., vol. 62, no. 14, pp. 3591–3603, Jul. 2014.
[25] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[26] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Commun. ACM: Research Highlights section, vol. 53, no. 12, pp. 93–100, Dec. 2010.
[27] T. Do, L. Gan, N. Nguyen, and T. Tran, "Sparsity adaptive matching pursuit algorithm for practical compressed sensing," in Proc. 42nd Asilomar Conf. on Signals, Syst. and Comput., Pacific Grove, CA, Oct. 2008, pp. 581–587.
[28] X. Bia, X. Chen, and Y. Zhang, "Variable step size stagewise adaptive matching pursuit algorithm for image compressed sensing," in Proc. IEEE Int. Conf. on Signal Process., Commun. and Comput. (ICSPCC), Kunming, China, Aug. 2013, pp. 1–4.
[29] X. He, R. Song, and W. Zhu, "Optimal pilot pattern design for compressed sensing-based sparse channel estimation in OFDM systems," J. Circuits, Syst., and Signal Process., vol. 31, no. 4, pp. 1379–1395, Aug. 2012.
[30] L. Applebaum, W. Bajwa, A. Calderbank, J. Haupt, and R. Nowak, "Deterministic pilot sequences for sparse channel estimation in OFDM systems," in Proc. 17th Int. Conf. on Digital Signal Process. (DSP), Corfu, Greece, Jul. 2011, pp. 1–7.
[31] C. Qi and L. Wu, "Optimized pilot placement for sparse channel estimation in OFDM systems," IEEE Signal Process. Lett., vol. 18, no. 12, pp. 749–752, Dec. 2011.
[32] P. Pakrooh, A. Amini, and F. Marvasti, "OFDM pilot allocation for sparse channel estimation," EURASIP J. Adv. Signal Process., vol. 2012, no. 59, pp. 1–9, Mar. 2012.
[33] C. Qi and L. Wu, "A study of deterministic pilot allocation for sparse channel estimation in OFDM systems," IEEE Commun. Lett., vol. 16, no. 5, pp. 742–744, May 2012.
[34] M. Gay, A. Lampe, and M. Breiling, "Sparse OFDM channel estimation based on regular pilot grids," in Proc. 9th Int. Conf. on Syst., Commun. and Coding (SCC), Munich, Germany, Jan. 2013, pp. 1–6.
[35] J. Chen, C. Wen, and P. Ting, "An efficient pilot design scheme for sparse channel estimation in OFDM systems," IEEE Commun. Lett., vol. 17, no. 7, pp. 1352–1355, Jul. 2013.
[36] C. Qi, G. Yue, L. Wu, Y. Huang, and A. Nallanathan, "Pilot design schemes for sparse channel estimation in OFDM systems," IEEE Trans. Veh. Technol., no. 99, pp. 1–13, Jun. 2014.
[37] C. J. Colbourn and J. H. Dinitz, "Other combinatorial designs," in Handbook of Combinatorial Designs, 2nd ed. CRC Press, 2007, ch. 5, pp. 392–436.
[38] J. Myers and A. D. Well, Research Design and Statistical Analysis, 2nd ed. Lawrence Erlbaum, 2003.
[39] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.
