
Transcript of Pre Talk 1

Page 1: Pre Talk 1

Pre talk on

"Some Studies on Adaptive Decision Feedback Equalizer for Wireless Systems"

By Ch. Sumanth Kumar

Research Scholar

Under the guidance of Prof. K.V.V.S. Reddy

Department of Electronics and Communication Engineering, A.U. College of Engineering (Autonomous)

Andhra University, Visakhapatnam

Page 2: Pre Talk 1

Outline

Adaptive Decision Feedback Equalizer --- Background, Issues & Challenges

Contributions made by this thesis

Implementation of modified fast block LMS algorithm

Modified fast block LMS algorithm based ADFE

ADFE using different variants of the LMS algorithm

Normalized modified block LMS based ADFE

Signed modified block LMS based ADFE

Normalized signed modified block LMS based ADFE

Partial update sign normalized LMS based ADFE

Page 3: Pre Talk 1

• Real time implementation of ADFE using TMS320C6713

• Conclusions

• References

• List of publications from the thesis

Page 4: Pre Talk 1

Adaptive Decision Feedback Equalizer -- Background

Communication channels may be characterized by

$H_c(\omega) = |H_c(\omega)|\,e^{j\theta_c(\omega)}$

Amplitude distortion results if $|H_c(\omega)|$ is not constant within the bandwidth of the signal.

Phase distortion results if $\theta_c(\omega)$ is not a linear function of $\omega$, i.e., the delay is not constant.

The result is signal dispersion (smearing).

The overlap of symbols owing to smearing is called ISI.

Page 5: Pre Talk 1

Figure: Intersymbol Interference (received pulses spaced $T$ apart overlap at the sampling instants)

Page 6: Pre Talk 1

A major cause of performance degradation in many communication systems is the ISI introduced by the time-dispersive characteristics of the involved channels.

The problem is particularly important in wireless transmission systems due to multipath effects.

Ideally, if the Rx transfer function is the inverse of that of the channel, it is possible to get back the undistorted signal and make correct decisions about the transmitted symbols.

The equalizer is one functional unit that tries to nullify the ISI.

Page 7: Pre Talk 1

Problems with linear equalizer

Figure: Linear equalizer -- the channel $H(z)$, driven by the symbols $a_k$ with additive noise $n_k$, is followed by a linear equalizer $C(z)$ and a quantizer producing the decisions $\hat{a}_k$; $e_k$ is the error. (a) The ISI is removed when $H(z)C(z) \approx 1$; (b) but the noise is also filtered by $C(z)$.

Page 8: Pre Talk 1

The power spectrum of the error can be written as

$S_e = |H(z)C(z) - 1|^2\,S_a + |C(z)|^2\,S_n$

where $S_a$ is the power spectrum of the data symbols and $S_n$ is the power spectrum of the noise process.

If $C(z) = \dfrac{1}{H(z)}$, the ISI contribution to the error vanishes.

If $H(z)$ has a spectral null, i.e., $H(z) = 0$ at any frequency within the bandwidth of $a_k$, the power of the noise is infinity.

Even without a spectral null, if some frequencies in $H(z)$ are greatly attenuated then the equalizer will greatly enhance the noise power.

Page 9: Pre Talk 1

The Decision Feedback Equalizer (DFE) is an effective means for equalizing channels that exhibit spectral nulls.

Figure: Decision Feedback Equalizer -- the channel output passes through a feedforward filter (FFF) producing $y_f(n)$; the quantizer decisions are fed back through a feedback filter (FBF) producing $y_b(n)$, which is subtracted before the quantizer.

Page 10: Pre Talk 1

The DFE employs a feedforward filter (FFF) to equalize the anticausal part of the channel impulse response.

The channel-FFF cascade forms a causal system with impulse response $1, h_1, h_2, \ldots$. The feedback filter (FBF), with $w_1^b = h_1$, $w_2^b = h_2, \ldots$, works on past decisions (assumed correct).

The residual ISI at the FFF output $y_f(n)$ is cancelled by subtracting the FBF output $y_b(n)$ from $y_f(n)$.

Page 11: Pre Talk 1

In most communication systems the variation of the channel characteristics over time is significant, so the equalizer should be able to adapt itself to combat the ISI.

In such cases an Adaptive DFE (ADFE) is used.

The FFF and FBF coefficients are trained by the LMS algorithm.

Page 12: Pre Talk 1

LMS Algorithm

• Self-learning: filter coefficients adapt in response to a training signal.

Figure: Adaptive filter $W(z)$ with input $x(n)$, output $y(n)$, desired response $d(n)$ and error $e(n) = d(n) - y(n)$.

• Filter update: Least Mean Squares (LMS) algorithm
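The update loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation; the filter length, step size and the channel-identification test signal are illustrative choices.

```python
import numpy as np

def lms(x, d, L=4, mu=0.05):
    """Basic LMS: adapt an L-tap FIR filter w so that w^T x(n) tracks d(n)."""
    w = np.zeros(L)
    e_hist = np.zeros(len(x))
    for n in range(L, len(x)):
        xn = x[n - L + 1:n + 1][::-1]   # tap vector [x(n), ..., x(n-L+1)]
        y = w @ xn                      # filter output
        e = d[n] - y                    # error against the desired response
        w = w + mu * e * xn             # approximate steepest-descent step
        e_hist[n] = e
    return w, e_hist

# sanity check: identify an unknown (hypothetical) 4-tap channel
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]
w, e = lms(x, d, L=4, mu=0.05)
```

After a few hundred iterations the weight vector `w` converges to the unknown response `h` and the error dies out.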

Page 13: Pre Talk 1

Figure: Basic ADFE -- the received signal $x(n)$ drives the FFF (output $y_f(n)$); past decisions drive the FBF (output $y_b(n)$); the slicer input is $y(n)$ and its output is $\hat{y}(n)$; the error $e(n)$ is formed against $d(n)$ (training) or $\hat{y}(n)$ (decision directed), with

$\mathbf{w}^f(n) = [w_0^f(n), \ldots, w_{p-1}^f(n)]^t$, $\mathbf{w}^b(n) = [w_1^b(n), \ldots, w_q^b(n)]^t$.

Page 14: Pre Talk 1

Complexity issues and related research in ADFE:

A common problem faced by the ADFE is that increasing the data rate increases the channel impulse response length, which increases the order of the FFF and FBF, and thereby increases the complexity and makes real time operation difficult.

Complexity further goes up for fast converging equalizers such as those belonging to the RLS family, which require a reduced training sequence, a valuable saving in bandwidth.

As the complexity increases, the power and chip area requirements also go up.

Page 15: Pre Talk 1

Complexity reduction of high speed ADFE has remained a topic of intense research over the last two decades.

At the block or architecture level, several pipelining and parallel processing techniques have been developed by Parhi, Wu et al., to achieve high processing speed.

At the algorithmic level, Berberidis et al. recently proposed some block and frequency domain based techniques.

Page 16: Pre Talk 1

Davidson et al. and Cioffi et al. proposed high speed ADFEs, but these do not track time varying channels effectively since the filter coefficients are adapted only once in every Mth sample, M being the block size.

Parhi proposed pipelining algorithms with quantizer loops. Here, by employing a look-ahead computation technique, loops containing nonlinear devices are transformed to equivalent forms which contain no nonlinear operation. But such implementations are practical only for low order ADFEs, since the hardware complexity can become enormous for higher order filters.

Gatherer et al. proposed a parallel ADFE algorithm, later modified as the extended LMS ADFE algorithm, in which the input data samples are broken into M blocks of N samples each and are processed by M ADFEs in parallel. Their algorithms, however, suffer on two counts, namely, incorrect initialization of the FFF, and a coding loss as extra samples are required to be transmitted for initializing the FBF.

Page 17: Pre Talk 1

Recently, Parhi and Lin independently proposed several architectures to implement ADFE for gigabit systems.

Berberidis et al. presented a new block ADFE that is mathematically equivalent to the conventional LMS based

sample by sample DFE but with considerably reduced computational load.

Shanbhag et al. proposed several high throughput architectures utilizing fine-grain pipelining of the arithmetic

elements. But fine grain pipelining of an ADFE is intrinsically difficult, since the ADFE output must be

available at the end of each iteration in order to cancel the effects of pre-cursor ISI.

Page 18: Pre Talk 1

Douglas, S.C. proposed adaptive filters with partial updates to achieve faster convergence with low complexity, where only a part of the filter coefficients is updated in each iteration.

Mahesh, G. et al. proposed the stochastic partial update LMS algorithm [78], where filter coefficients are updated in a random manner.

Dogancay, K. et al. proposed the selective partial update NLMS algorithm [31], where the selection criterion is obtained from the solution of a constrained optimization problem.

Page 19: Pre Talk 1

We have made an attempt to develop efficient realizations of Adaptive Decision Feedback Equalizers by considering different combinations and variants of the LMS algorithm to improve the computational speed as well as to reduce the computational complexity.

Page 20: Pre Talk 1

Contributions made by this thesis

Efficient realization of FFT based modified block LMS algorithm

Implementation of ADFE using modified block LMS algorithm

Normalized modified block LMS based ADFE

Signed versions of modified block LMS based ADFE

Normalized signed modified block LMS based ADFE

Page 21: Pre Talk 1

Partial update sign normalized modified block LMS based ADFE

ADFE implemented in real time using the TMS320C6713 DSP processor.

Page 22: Pre Talk 1

Basic ADFE Equations:

$\hat{y}(n) = Q[y(n)]$,

$y(n) = \mathbf{w}^t(n)\,\boldsymbol{\Phi}(n)$,

$\mathbf{w}(n) = [\mathbf{w}^{f\,t}(n)\;\; \mathbf{w}^{b\,t}(n)]^t$,

$\boldsymbol{\Phi}(n) = [\mathbf{x}^t(n)\;\; \mathbf{v}^t(n-1)]^t$,

where,

$\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-p+1)]^t$,

$\mathbf{v}(n-1) = [v(n-1), \ldots, v(n-q)]^t$.

Page 23: Pre Talk 1

ADFE Weight update equations (LMS):

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\boldsymbol{\Phi}(n)\,e(n)$,

where,

$e(n) = v(n) - y(n)$ : the error signal

$\mu$ : algorithm step size

$v(n) = d(n)$ [during training mode]

$v(n) = \hat{y}(n)$ [during decision directed mode]
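The basic ADFE recursion above can be exercised end to end in a short NumPy sketch. This is only an illustration: the channel taps (0.304, 0.903, 0.304) are taken from the simulation study later in the talk, but BPSK symbols, a one-symbol decision delay, and the filter lengths and step size are simplifying assumptions made here, not the talk's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.304, 0.903, 0.304])       # channel from the simulation study
a = rng.choice([-1.0, 1.0], size=3000)    # BPSK symbols (assumption)
x = np.convolve(a, h)[:len(a)] + 0.01 * rng.standard_normal(len(a))

p, q, mu, n_train, delay = 5, 3, 0.02, 500, 1   # delay: channel peak at tap 1
wf, wb = np.zeros(p), np.zeros(q)
v = np.zeros(q)                            # past decisions v(n-1), ..., v(n-q)
dec = np.zeros(len(a))
for n in range(p, len(a)):
    xn = x[n - p + 1:n + 1][::-1]          # FFF input vector x(n)
    y = wf @ xn + wb @ v                   # y(n) = w^t(n) Phi(n)
    d_hat = 1.0 if y >= 0 else -1.0        # quantizer decision y_hat(n)
    ref = a[n - delay] if n < n_train else d_hat   # v(n): d(n) or y_hat(n)
    e = ref - y                            # e(n) = v(n) - y(n)
    wf += mu * e * xn                      # LMS update of the FFF
    wb += mu * e * v                       # LMS update of the FBF
    v = np.concatenate(([ref], v[:-1]))    # shift the newest decision in
    dec[n] = d_hat

# symbol error rate after convergence, accounting for the decision delay
ser = np.mean(dec[1000:] != a[1000 - delay:len(a) - delay])
```

After the 500-sample training phase the equalizer runs decision directed and the symbol error rate settles near zero for this channel and noise level.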

Page 24: Pre Talk 1

FFT based modified fast Block LMS Algorithm

The LMS algorithm updates the filter coefficients by using an approximate version of the steepest descent procedure.

Being computationally simple and having desirable numerical qualities, the LMS algorithm received a great deal of attention, despite the fact that its convergence behavior has been surpassed by several faster techniques.

This modified algorithm updates the filter coefficients on a block-by-block basis.

Page 25: Pre Talk 1

Input data $x(n)$: partitioned into non-overlapping blocks of size $P$ each.

$j$-th block: $n = jP + r$, $r = 0, 1, \ldots, P-1$, $j = 0, 1, 2, \ldots$

$\mathbf{w}(j) = [w_0(j), w_1(j), \ldots, w_{L-1}(j)]^t$ : $L$-th order filter weight vector for the $j$-th block.

Filter coefficients are updated from block to block, and held constant within a block.

Page 26: Pre Talk 1

The main operations are filtering, output error computation and weight updating.

This gives substantial computational savings when compared with the algorithm which updates the filter coefficients on a sample-by-sample basis.

Block Adaptive Filter

Page 27: Pre Talk 1

$y(n) = \mathbf{w}^t(j)\,\mathbf{x}(n)$ : filter output at the $n$-th index,

where $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^t$, $n = jP + r$, $r = 0, 1, \ldots, P-1$.

$e(n) = d(n) - y(n)$ : output error at the $n$-th index,

where $d(n)$ : desired response, given during training.

Filter coefficients are updated to minimize $E[e^2(n)]$ progressively with $n$. Update relation (BLMS):

$\mathbf{w}(j+1) = \mathbf{w}(j) + \mu \sum_{r=0}^{P-1} \mathbf{x}(jP+r)\,e(jP+r)$
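The BLMS update can be sketched directly from the relation above. This is a minimal time-domain illustration (the FFT-based fast version comes next in the talk); the filter length, block size, step size and identification test are illustrative choices.

```python
import numpy as np

def block_lms(x, d, L=4, P=8, mu=0.01):
    """Block LMS: weights held fixed inside each block of P samples and
    updated once per block with the accumulated gradient term."""
    w = np.zeros(L)
    for j in range(len(x) // P):
        grad = np.zeros(L)
        for r in range(P):
            n = j * P + r
            if n < L - 1:
                continue                      # not enough past samples yet
            xn = x[n - L + 1:n + 1][::-1]     # [x(n), ..., x(n-L+1)]
            e = d[n] - w @ xn                 # e(n) = d(n) - y(n)
            grad += e * xn                    # accumulate x(jP+r) e(jP+r)
        w = w + mu * grad                     # one update per block
    return w

# identify an unknown (hypothetical) 4-tap system as a sanity check
rng = np.random.default_rng(2)
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w = block_lms(x, d, L=4, P=8, mu=0.01)
```

Although the weights move only once per block, the accumulated gradient drives them to the same solution as the sample-by-sample recursion.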

Page 28: Pre Talk 1

A fast implementation via FFT is possible to produce $y(jP+r)$, $r = 0, 1, \ldots, P-1$, and $\mathbf{w}(j+1)$.

$\mu$ : step size; for convergence, $0 < \mu < \dfrac{2}{P\,tr[\mathbf{R}]}$

$\mathbf{R} = E[\mathbf{x}(n)\,\mathbf{x}^t(n)]$, i.e., the input correlation matrix.

Page 29: Pre Talk 1

Figure: Fast implementation of the proposed BLMS algorithm -- the input $x(n)$ is serial-to-parallel converted into overlapping sub-blocks of size $M = L + P - 1$ and transformed by an $M$-point FFT to give $X(k)$; the filter output $y(n)$ is the last $P$ terms of the $M$-point IFFT of $X(k)W(k)$; the error $e(n) = d(n) - y(n)$, padded with $L-1$ zeros at the front, is transformed back to the frequency domain, and the last $P-1$ elements of the gradient IFFT are set to zero before forming the updated weights $w(j+1)$.
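The filtering stage of the diagram can be verified numerically. The sketch below (sizes are illustrative) checks the overlap-save identity the diagram relies on: with an $M = L + P - 1$ point FFT, the last $P$ terms of the circular convolution equal the valid linear convolution outputs of the block.

```python
import numpy as np

rng = np.random.default_rng(3)
L, P = 8, 8                      # filter length and block size (illustrative)
M = L + P - 1                    # FFT size, as in the diagram
w = rng.standard_normal(L)
x = rng.standard_normal(M)       # L-1 past samples followed by the P new ones

# frequency-domain product; only the last P outputs of the IFFT are valid
W = np.fft.fft(w, M)
X = np.fft.fft(x, M)
y_fast = np.real(np.fft.ifft(X * W))[-P:]

# direct time-domain reference: y(n) = sum_k w_k x(n-k) over the block
y_ref = np.array([w @ x[n - L + 1:n + 1][::-1] for n in range(L - 1, M)])
```

The two results agree to numerical precision; the FFT route replaces $O(LP)$ multiply-accumulates per block with three $M$-point transforms.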

Page 30: Pre Talk 1

Implementation of modified block LMS based ADFE

Block ADFE Equations:

$\mathbf{y}_Q(jQ+Q-1) = X_{Q,M}(j)\,\mathbf{w}^f_M + D_{Q,L}(j)\,\mathbf{w}^b_L$,

$\mathbf{d}_Q(jQ+Q-1) = Q\{\mathbf{y}_Q(jQ+Q-1)\}$,

$\mathbf{e}_Q(jQ+Q-1) = \mathbf{d}_Q(jQ+Q-1) - \mathbf{y}_Q(jQ+Q-1)$,

$\mathbf{w}^f_M(j+1) = \mathbf{w}^f_M(j) + \mu\,X^H_{Q,M}(j)\,\mathbf{e}_Q(jQ+Q-1)$,

$\mathbf{w}^b_L(j+1) = \mathbf{w}^b_L(j) + \mu\,D^H_{Q,L}(j)\,\mathbf{e}_Q(jQ+Q-1)$,

where,

$X_{Q,M}(j) = \begin{bmatrix} x(jQ+Q-1) & \cdots & x(jQ+Q-M) \\ \vdots & & \vdots \\ x(jQ) & \cdots & x(jQ-M+1) \end{bmatrix}$

Page 31: Pre Talk 1

$D_{Q,L}(j) = [D_{1,Q,Q-1}\;\; D_{2,Q,L-Q+1}]$ with

$D_{1,Q,Q-1} = \begin{bmatrix} d(jQ+Q-2) & \cdots & d(jQ) \\ \vdots & & \vdots \\ d(jQ-1) & \cdots & d(jQ-Q+1) \end{bmatrix}$,

$D_{2,Q,L-Q+1} = \begin{bmatrix} d(jQ-1) & \cdots & d(jQ-L+Q-1) \\ \vdots & & \vdots \\ d(jQ-Q) & \cdots & d(jQ-L) \end{bmatrix}$

Page 32: Pre Talk 1

Equalization and Weight updating:

This consists of 3 main computations, namely,

(a) FFF output:

-- FFF output $\mathbf{y}^f_Q(jQ+Q-1) = X_{Q,M}(j)\,\mathbf{w}^f_M$

-- Using the overlap and save method,

$\mathbf{y}^f_Q(jQ+Q-1) = \mathrm{last}\; Q \;\mathrm{terms\ of}\; F^{-1}[X^d_S(j)\,W^f_S]$, where

$S = Q + M - 1$,

$W^f_S = F([\,(\mathbf{w}^f_M)^t\;\; \mathbf{0}^t_{S-M}\,]^t)$, and

$X^d_S(j) = \mathrm{diag}(F[\,x(jQ+Q-S)\; \ldots\; x(jQ+Q-1)\,]^t)$.

Page 33: Pre Talk 1

(b) FBF output:

Unlike the FFF, the FBF output $\mathbf{y}^b_Q(jQ+Q-1) = D_{Q,L}(j)\,\mathbf{w}^b_L$ contains unknown decisions given by $d(k)$, $k = jQ, \ldots, jQ+Q-2$.

To avoid the causality problem, the computation of $\mathbf{y}^b_Q(jQ+Q-1)$ is systematically decomposed into two parts: one containing past and known decisions, and the other involving purely the current and thus unknown decisions.

$\mathbf{y}^b_Q(jQ+Q-1) = D_{2,Q,L-Q+1}(j)\,\mathbf{w}^b_{2,L-Q+1}(j) + W^b_{Q,2Q-1}(j)\,\mathbf{d}_{2Q-1}(jQ+Q-1)$, where

Page 34: Pre Talk 1

$W^b_{Q,2Q-1}(j) = \begin{bmatrix} 0 & w^b_1(j) & \cdots & w^b_{Q-1}(j) & 0 & \cdots & 0 \\ 0 & 0 & w^b_1(j) & \cdots & w^b_{Q-1}(j) & \cdots & 0 \\ \vdots & & & & & & \vdots \\ 0 & 0 & \cdots & 0 & w^b_1(j) & \cdots & w^b_{Q-1}(j) \end{bmatrix}$,

$\mathbf{w}^b_{2,L-Q+1}(j) = [w^b_Q(j)\;\; w^b_{Q+1}(j)\;\; \ldots\;\; w^b_L(j)]^t$.

Partitioning $W^b_{Q,2Q-1}(j) = [W^b_{1,Q,Q}(j)\;\; W^b_{2,Q,Q-1}(j)]$, the FBF output can be written as,

Page 35: Pre Talk 1

-- $\mathbf{y}^b_Q(jQ+Q-1) = D_{2,Q,L-Q+1}(j)\,\mathbf{w}^b_{2,L-Q+1}(j) + W^b_{1,Q,Q}(j)\,\mathbf{d}_Q(jQ+Q-1) + W^b_{2,Q,Q-1}(j)\,\mathbf{d}_{Q-1}(jQ-1)$, where

$\mathbf{d}_Q(jQ+Q-1) = [d(jQ+Q-1)\; \ldots\; d(jQ)]^t$ contains the unknown decisions, and $\mathbf{d}_{Q-1}(jQ-1) = [d(jQ-1)\; \ldots\; d(jQ-Q+1)]^t$ contains $Q-1$ known decisions from previous sub-blocks.

-- Let the FB2 output $\mathbf{y}^b_2(jQ+Q-1) = D_{2,Q,L-Q+1}(j)\,\mathbf{w}^b_{2,L-Q+1}(j)$,

$\mathbf{y}^b_{1,1}(jQ+Q-1) = W^b_{1,Q,Q}(j)\,\mathbf{d}_Q(jQ+Q-1)$,

$\mathbf{y}^b_{1,2}(jQ+Q-1) = W^b_{2,Q,Q-1}(j)\,\mathbf{d}_{Q-1}(jQ-1)$, and

-- FB1 output $\mathbf{y}^b_1(jQ+Q-1) = \mathbf{y}^b_{1,1}(jQ+Q-1) + \mathbf{y}^b_{1,2}(jQ+Q-1)$.

Page 36: Pre Talk 1

Let $\mathbf{y}^c_Q(jQ+Q-1) = \mathbf{y}^f_Q(jQ+Q-1) + \mathbf{y}^b_2(jQ+Q-1) + \mathbf{y}^b_{1,2}(jQ+Q-1)$.

Then, $\mathbf{y}_Q(jQ+Q-1) = \mathbf{y}^c_Q(jQ+Q-1) + \mathbf{y}^b_{1,1}(jQ+Q-1)$.

$\mathbf{y}^b_{1,1}(jQ+Q-1)$ involves unknown decisions.

An iterative procedure is suggested by Berberidis:

First compute $\mathbf{y}^b_{1,1}(jQ+Q-1)$ using an appropriately chosen initial value for $\mathbf{d}_Q(jQ+Q-1)$.

Then evaluate $\mathbf{y}_Q(jQ+Q-1)$, which is then used to compute $\mathbf{d}_Q(jQ+Q-1)$ using $\mathbf{d}_Q(jQ+Q-1) = f\{\mathbf{y}_Q(jQ+Q-1)\}$.

This is again used to compute $\mathbf{y}^b_{1,1}(jQ+Q-1)$ and then $\mathbf{y}_Q(jQ+Q-1)$, and the iteration is carried out further.

Page 37: Pre Talk 1

It is shown that this iteration converges to the correct vector $\mathbf{d}_Q(jQ+Q-1)$ in $Q$ or fewer steps for any choice of initial value.

A simple choice is to set the initial decision vector to the zero vector (IS1).

In IS2 the initial value of $\mathbf{d}_Q(jQ+Q-1)$ is chosen by setting $\mathbf{d}_Q(jQ+Q-1) = \mathbf{y}_Q(jQ+Q-1)$ and solving for $\mathbf{d}_Q(jQ+Q-1)$ using

$[I_Q - W^b_{1,Q,Q}(j)]\,\mathbf{d}_Q(jQ+Q-1) = \mathbf{y}^c_Q(jQ+Q-1)$.

Page 38: Pre Talk 1

The error vector is now computed as

$\mathbf{e}_Q(jQ+Q-1) = \mathbf{d}_Q(jQ+Q-1) - \mathbf{y}_Q(jQ+Q-1)$

(c) Weight updating:

$\mathbf{w}^f_M(j+1) = \mathbf{w}^f_M(j) + \mu \sum_{r=0}^{Q-1} \mathbf{x}_M(jQ+r)\,e(jQ+r)$

$\mathbf{w}^b_L(j+1) = \mathbf{w}^b_L(j) + \mu \sum_{r=0}^{Q-1} \mathbf{d}_L(jQ+r)\,e(jQ+r)$

The proposed realizations are about four times faster than a sample based realization for moderately large values of L, M and Q.

Page 39: Pre Talk 1

Simulation Studies

The channel is modeled with a second order FIR filter, having impulse response 0.304, 0.903, 0.304.

The channel noise is modeled as AWGN. The transmitted symbols are chosen from an alphabet of 8 equispaced, equiprobable discrete amplitude levels.

The transmitted signal power was taken to be 6 dB.

To these symbols additive white Gaussian noise having a variance of 0.1 is added. The lengths of the FFF and the FBF were chosen as p = 3 and q = 3.

Step size µ = 0.001.

Page 40: Pre Talk 1

The ADFE was first simulated by the proposed scheme, choosing block length N as 25.

The ADFE was operated in training mode for the first 100 iterations and then switched over to the decision directed mode for the subsequent 500 iterations.

The FFF and FBF weights are updated separately using weight updating equations.

The corresponding learning curve is obtained by plotting the MSE versus the number of iterations.

Next, the MSE curves were plotted for different input block lengths of N = 10, 25, 50 and 100.

Page 41: Pre Talk 1
Page 42: Pre Talk 1

Increasing the block length leads to a larger spread in the magnitudes of the data samples in the block, and hence more pronounced quantization noise effects via block formatting.

The steady state MSE therefore increases with N.

Page 43: Pre Talk 1

Realization of Normalized modified Block LMS based ADFE

The normalized LMS algorithm provides good convergence behavior compared to the basic LMS algorithm.

The NLMS algorithm can be considered a slightly improved version of the LMS algorithm which takes into account the variation in the signal level at the filter output by selecting a normalized step size parameter, resulting in a stable and fast converging adaptive algorithm.

Page 44: Pre Talk 1

The NLMS algorithm estimates the energy of the input signal at each sample and normalizes the step size by this estimate, therefore selecting a step size inversely proportional to the instantaneous input signal power.

The weight update equation for the NLMS algorithm is given by

$w(n+1) = w(n) + \mu(n)\,e(n)\,X(n)$

where $\mu(n) = \dfrac{\mu}{\|x(n)\|^2}$.

Page 45: Pre Talk 1

The tap input vector is given by

$X(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^t$

The error signal is given by

$e(n) = d(n) - w^t(n)\,X(n)$

The filter weight vector is given by

$w(n) = [w_0(n), w_1(n), \ldots, w_{L-1}(n)]^t$

Page 46: Pre Talk 1

Here the adaptation constant $\mu$ is within the range 0 to 2 for convergence, and $\varepsilon$ is an appropriate positive number introduced to avoid divide-by-zero like situations which may arise when the norm of the input signal becomes very small.

The weight updating equation for the ADFE using the NLMS algorithm can be modified and written as

$W(n+1) = W(n) + \mu(n)\,\Phi(n)\,e(n)$

where $\Phi(n) = [x(n), \ldots, x(n-p+1), v(n-1), \ldots, v(n-q)]^t$
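The normalized update can be sketched directly from the equations above. This is a minimal illustration of NLMS itself (not the block ADFE variant); the filter length and identification test are illustrative choices.

```python
import numpy as np

def nlms(x, d, L=4, mu=0.5, eps=1e-6):
    """NLMS: LMS with the step normalized by the instantaneous input energy,
    mu(n) = mu / (eps + ||x(n)||^2)."""
    w = np.zeros(L)
    e_hist = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        xn = x[n - L + 1:n + 1][::-1]        # tap input vector X(n)
        e = d[n] - w @ xn                    # e(n) = d(n) - w^t(n) X(n)
        w = w + (mu / (eps + xn @ xn)) * e * xn   # normalized step
        e_hist[n] = e
    return w, e_hist

# identify an unknown (hypothetical) 4-tap system as a sanity check
rng = np.random.default_rng(4)
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(1500)
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d, L=4, mu=0.5)
```

Because the effective step scales with the inverse of the input power, the same `mu` works across widely varying signal levels, which is the stability advantage the slide describes.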

Page 47: Pre Talk 1

$W^f(n) = [w_0^f(n), w_1^f(n), \ldots, w_{p-1}^f(n)]^t$ is the $p$-th order FFF coefficient vector.

$W^b(n) = [w_1^b(n), w_2^b(n), \ldots, w_q^b(n)]^t$ is the $q$-th order FBF coefficient vector.

$W(n) = [W^{f\,t}(n)\;\; W^{b\,t}(n)]^t$

The signal $v(n)$ is given by the desired response $d(n)$ during the initial training phase and by $\hat{y}(n)$ during the subsequent decision directed phase.

Page 48: Pre Talk 1

The overall output $y(n)$ is given by

$y(n) = W^t(n)\,\Phi(n)$

The output error:

$e(n) = v(n) - y(n)$

The feedforward filter output:

$y^f(n) = W^{f\,t}(n)\,x(n)$

The feedback filter output:

$y^b(n) = W^{b\,t}(n)\,v(n-1)$

Page 49: Pre Talk 1

Now the overall output $y(n)$, which is the input to the decision device, is

$y(n) = y^f(n) + y^b(n)$

1) Initially transmit the known sequence.

2) Assume, initially, both the FFF and FBF weights to be zero.

3) Find the output, which is the sum of the outputs of the FFF and FBF.

4) Estimate the tap weight vector at each instant of time using the normalized modified block LMS algorithm.

5) Update the filter coefficients.

Page 50: Pre Talk 1

Computational Complexity

Number of computations required for step size evaluation:

To evaluate the time varying step size recursively, the proposed scheme requires

2 MAC operations to compute $\|x(n)\|^2$,

1 addition for $\varepsilon + \|x(n)\|^2$,

1 division for $\dfrac{\mu}{\varepsilon + \|x(n)\|^2}$, at each index $n$.
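The "2 MAC operations" figure comes from updating the window energy recursively rather than re-summing $L$ squares each sample. A short sketch of that sliding-window recursion (window length and signal are illustrative), checked against a brute-force sum:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 8
x = rng.standard_normal(200)

# sliding-window energy: sigma(n) = sigma(n-1) + x(n)^2 - x(n-L)^2,
# i.e. two MAC operations per sample regardless of the window length L
sigma = 0.0
rec = np.zeros(len(x))
for n in range(len(x)):
    x_old = x[n - L] if n >= L else 0.0   # sample leaving the window
    sigma += x[n] * x[n] - x_old * x_old
    rec[n] = sigma

# brute-force reference over the same window of L samples
ref = np.array([np.sum(x[max(0, n - L + 1):n + 1] ** 2) for n in range(len(x))])
```

The recursive and brute-force estimates agree exactly, which is why the per-sample cost of the normalization stays constant as $L$ grows.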

Page 51: Pre Talk 1

Number of computations required for weight vector updating:

Updating $W(n)$ to $W(n+1)$ requires $(L+1)$ MAC operations. Of these, one MAC operation is needed to compute

$\dfrac{\mu}{\varepsilon + \|x(n)\|^2}\,e(n)$

and a total of $L$ MAC operations are required to calculate $W(n+1)$.

Page 52: Pre Talk 1

Number of computations required for evaluating the filter output:

To compute the overall output, a total of $L$ MAC operations are required.

Parameter       | MAC | Addition | Division
Step size       | 2   | 1        | 1
Weight updating | L+1 | Nil      | Nil
Filter output   | L   | Nil      | Nil

Table: Number of operations required per iteration for evaluating step size, weight updating and filter output using the NLMS algorithm.

Page 53: Pre Talk 1

Simulation Results

Figure: Learning curves (MSE in dB versus number of iterations) for LMS and Normalized LMS based ADFE.

Page 54: Pre Talk 1

Consider µ = 0.001.

The learning curve of the proposed ADFE shows good convergence behaviour after 50 iterations, whereas it takes more than 100 iterations for the LMS based ADFE. The steady state MSE is also within the acceptable range.

Page 55: Pre Talk 1

Realization of Signed modified Block LMS based ADFE

There are three signed versions of the LMS algorithm, namely the

signed regressor LMS,

sign-sign LMS, and

sign LMS algorithms.

These algorithms provide less computational complexity compared to the basic LMS algorithm.

The proposed schemes are particularly suitable for implementation of the ADFE with less computational complexity.

Page 56: Pre Talk 1

The signed LMS algorithms, which make use of the signum (polarity) of either the error or the input signal, or both, have been derived from the LMS algorithm from the point of view of simplicity in implementation.

In all these algorithms there is a significant reduction in computing time, mainly pertaining to the time required for multiplications.

In the sign-sign algorithm, the signum of the input is used in addition to the signum of the error signal, thus requiring only a one-bit multiplication or a logical EX-OR function.

Page 57: Pre Talk 1

In the signed regressor LMS algorithm (SRLMS), the polarity of the input signal is used to adjust the tap weights.

The weight updating equations:

Signed-regressor LMS algorithm: $w(n+1) = w(n) + \mu\,\mathrm{sgn}\{x(n)\}\,e(n)$

Sign-sign LMS algorithm: $w(n+1) = w(n) + \mu\,\mathrm{sgn}\{x(n)\}\,\mathrm{sgn}\{e(n)\}$

Sign LMS algorithm: $w(n+1) = w(n) + \mu\,x(n)\,\mathrm{sgn}\{e(n)\}$

where $\mathrm{sgn}\{\cdot\}$ is the well known signum function.

The error signal is given by $e(n) = d(n) - y(n)$.

Page 58: Pre Talk 1

The sequence d(n) is called the desired response, available during the initial training period, and 'µ' is an appropriate step size to be chosen as 0 < µ < 2/tr(R) for the convergence of the algorithm.

Implementation Procedure

Initially, during training mode, the known sequence d(n) is transmitted and both the FFF and FBF are trained by the appropriate sign based algorithms.

Then the output y(n), which is the sum of both FFF and FBF outputs, is computed.

The error sequence e(n) is estimated and the filter coefficients are updated for each iteration.
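The three signed updates from the previous slide can be compared side by side in a short sketch. This is an illustration on a system-identification toy problem (filter, step size and test signal are illustrative choices, not the talk's ADFE setup):

```python
import numpy as np

def sign_lms(x, d, L=4, mu=0.005, variant="sign"):
    """The three signed LMS variants from the slide:
    'sr'   : w += mu * sgn{x(n)} * e(n)        (signed-regressor)
    'ss'   : w += mu * sgn{x(n)} * sgn{e(n)}   (sign-sign)
    'sign' : w += mu * x(n) * sgn{e(n)}        (sign LMS)"""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        xn = x[n - L + 1:n + 1][::-1]
        e = d[n] - w @ xn
        if variant == "sr":
            w = w + mu * np.sign(xn) * e
        elif variant == "ss":
            w = w + mu * np.sign(xn) * np.sign(e)
        else:
            w = w + mu * xn * np.sign(e)
    return w

rng = np.random.default_rng(6)
h = np.array([1.0, 0.5, -0.3, 0.1])     # unknown (hypothetical) system
x = rng.standard_normal(6000)
d = np.convolve(x, h)[:len(x)]
results = {v: sign_lms(x, d, variant=v) for v in ("sr", "ss", "sign")}
```

All three converge to the neighbourhood of the true response, at the price of some extra steady-state jitter compared to plain LMS; the sign-sign variant needs no multiplications at all when µ is a power of two.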

Page 59: Pre Talk 1

Computational Complexity

S.No | Variant of sign LMS algorithm | Additions/Subtractions | Shifts | Multiplications
1    | Sign LMS                      | L                      | L      | Nil
2    | Signed-regressor LMS          | L                      | Nil    | 1
3    | Sign-sign LMS                 | L                      | Nil    | Nil

Table 5.1: Number of additions/subtractions, shifts and multiplications required for weight updating using the sign, signed-regressor, and sign-sign LMS algorithms.

Page 60: Pre Talk 1

Figure : Learning curves for LMS and signed regressor LMS(SRLMS) based ADFE

Page 61: Pre Talk 1

Figure 5.2: Learning curves for LMS and Sign LMS (SLMS) based ADFE.

Page 62: Pre Talk 1

Figure 5.3: Learning curves for LMS and Sign-Sign LMS (SSLMS) based ADFE.

Page 63: Pre Talk 1

Figure : MSE plots for signed regressor ADFE for block lengths N=10, 25, 50, 100.

Page 64: Pre Talk 1

Figure : MSE plots for sign ADFE for block lengths N=10, 25, 50, 100.

Page 65: Pre Talk 1

Figure : MSE plots for sign-sign ADFE for block lengths N=10, 25, 50, 100.

Page 66: Pre Talk 1

The proposed schemes were simulated as before to study the effects of block formation of the equalizer coefficients on the performance of the sign LMS based ADFE. For this, the same simulation model and environment as used earlier for the ADFE is considered. The simulations were run for different block lengths (N = 10, 25, 50 and 100), allocating 8 bits to the weight vectors of the FFF and FBF and keeping the step size as 0.001. The simulation results for the LMS based ADFE and its three variants considered above are presented in the figures.

Page 67: Pre Talk 1

Realization of Normalized Signed modified Block LMS based ADFE

Here the ADFE is implemented by combining the modified block LMS algorithm, the normalized LMS algorithm and the signed versions of the LMS algorithm.

The normalized signed regressor LMS algorithm (NSRLMS) is a counterpart of the NLMS algorithm, derived from the signed regressor LMS algorithm (SRLMS), where the normalizing factor for the SRLMS equals the sum of the absolute values of the input signal vector components.

Page 68: Pre Talk 1

The weight update equation of the normalized signed regressor LMS algorithm (NSRLMS) can be obtained by modifying the weight update equation of the SRLMS algorithm, and can be written as

$W(n+1) = W(n) + \dfrac{\mu}{\varepsilon + \|x(n)\|^2}\,\mathrm{sgn}\{X(n)\}\,e(n)$

Here the data vector $X(n)$ is given by

$X(n) = [x(n), x(n-1), \ldots, x(n-L+1)]^t$

Page 69: Pre Talk 1

$\mathrm{sgn}\{X(n)\}$ is given by

$\mathrm{sgn}\{X(n)\} = [\mathrm{sgn}\{x(n)\}, \mathrm{sgn}\{x(n-1)\}, \ldots, \mathrm{sgn}\{x(n-L+1)\}]^t$

The weight update equation of the normalized sign-sign LMS algorithm (NSSLMS) can be obtained by modifying the weight update equation of the SSLMS algorithm, and can be written as

$W(n+1) = W(n) + \dfrac{\mu}{\varepsilon + \|x(n)\|^2}\,\mathrm{sgn}\{X(n)\}\,\mathrm{sgn}\{e(n)\}$

The weight update equation of the normalized sign LMS algorithm (NSLMS) can be obtained by modifying the weight update equation of the SLMS algorithm, and can be written as

$W(n+1) = W(n) + \dfrac{\mu}{\varepsilon + \|x(n)\|^2}\,\mathrm{sgn}\{e(n)\}\,X(n)$

Page 70: Pre Talk 1

Both the feedforward and feedback filter coefficients are trained by the weight update equations of the NSRLMS, NSSLMS and NSLMS algorithms. The training is imparted by a pilot sequence (known transmitted sequence) $d(n)$ during the initial training mode, and by the output decision $y(n)$ during the subsequent decision directed mode. The input $v(n)$ to the FBF is $d(n)$ during the initial training period and $y(n)$ during the subsequent decision directed phase.

The feedforward filter output $y^f(n)$ is

$y^f(n) = W^{f\,t}(n)\,x(n)$

where $W^f(n) = [w_0^f(n), \ldots, w_{p-1}^f(n)]^t$.

Page 71: Pre Talk 1

The feedback filter output $y^b(n)$ is

$y^b(n) = W^{b\,t}(n)\,v(n-1)$

Now the overall output, which is the input to the decision device, $y(n)$, is

$y(n) = y^f(n) + y^b(n)$

Computational Complexity:

For the $L$-th order FFF and FBF, to update the coefficients using the LMS algorithm, $L$ multiplications and $L$ additions are required. For the error $e(n)$ one addition is required. For the product $\mu\,e(n)$ one multiplication is required.

Page 72: Pre Talk 1

For the output $y(n)$, $L$ multiplications and $L-1$ additions are required. So per output a total of $(2L+1)$ multiplications and $2L$ additions are required. The NLMS algorithm needs one additional computation, the term $\|x(n)\|^2$.

This extra computation involves only two squaring operations (two multiplications), one addition and one subtraction, if we implement it using a recursive structure.

In the case of the signed regressor LMS algorithm only one multiplication is needed, for obtaining the product $\mu\,e(n)$.

Page 73: Pre Talk 1

In the case of the other two LMS algorithms (SSLMS, SLMS) no multiplications are required if µ is chosen as a power of two ($2^{-l}$), as this multiplication can be efficiently implemented using shift registers.

S.No | Type of Algorithm | Multiplications | Additions | Shifts
1    | LMS               | 2L+1            | 2L        | Nil
2    | NLMS              | 2L+3            | 2L+2      | Nil
3    | NSRLMS            | 1               | 2L+2      | Nil
4    | NSLMS             | Nil             | 2L+2      | 2L+2
5    | NSSLMS            | Nil             | 2L+2      | Nil

Table: Comparison of computational complexity for different LMS based algorithms.

It is observed that the sign based algorithms are largely free from multiplication operations.

Page 74: Pre Talk 1

Results and Conclusions

The mean squared error curves are compared for ADFEs with the LMS, Normalized Sign LMS (NSLMS), Normalized Signed regressor LMS (NSRLMS), and Normalized Sign-sign LMS (NSSLMS) algorithms.

The ensemble averaging was performed over 100 independent trials of the experiment.

Step size µ = 0.001 is considered.

The number of iterations was taken as 400. For the first 100 samples the ADFE is in training mode and it is in decision directed mode for the next 300 samples.

Page 75: Pre Talk 1

Figure : Learning curves for LMS and Normalized signed-regressor LMS based ADFE.

Page 76: Pre Talk 1

Figure : Learning curves for LMS and Normalized sign LMS based ADFE

Page 77: Pre Talk 1

Figure: Learning curves for LMS and Normalized sign-sign LMS based ADFE

Page 78: Pre Talk 1

Fig: Comparison of Bit Error Rate (BER) plots of the Normalized Signed regressor LMS (NSRLMS) based ADFE with the LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.

Page 79: Pre Talk 1

Figure : Comparison of Bit Error Rate (BER) plots of the Normalized Sign LMS (NSLMS) based ADFE with the LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.

Page 80: Pre Talk 1

Fig: Comparison of Bit Error Rate (BER) plots of the Normalized Sign-sign LMS (NSSLMS) based ADFE with the LMS, Normalized LMS (NLMS) and Sign LMS (SLMS) based ADFEs.

Page 81: Pre Talk 1

Partial update Sign Normalized LMS based Adaptive Decision Feedback Equalizer

Here only a part of the filter coefficients is updated at each iteration, without reducing the order of the filter, in a manner which degrades algorithm performance as little as possible.

Two types of partial update LMS algorithms:

Periodic LMS algorithm

Sequential LMS algorithm

Page 82: Pre Talk 1

T. Aboulnasr et al. proposed the M-Max-NLMS algorithm, where the filter coefficients to update are obtained from the minimization of a modified a posteriori error expression.

T. Schertler et al. proposed the selective block update NLMS algorithm, which updates the filter coefficients on a block basis.

Dogancay, K. et al. proposed the selective partial update NLMS algorithm, where the selection criterion is obtained from the solution of a constrained optimization problem.

Werner, S. et al. proposed the data selective partial updating NLMS algorithm, which uses the set membership filtering method.

Mahesh, G. et al. proposed the stochastic partial update LMS algorithm, where filter coefficients are updated in a random manner.

Page 83: Pre Talk 1

Proposed Implementation

Let us assume that the feedforward and feedback filters are FIR of even length L.

For the instant $n$ the filter coefficients $W(n)$ are separated into even and odd indexed terms as

$W_e(n) = [w_2(n), w_4(n), w_6(n), \ldots, w_L(n)]^t$

$W_o(n) = [w_1(n), w_3(n), w_5(n), \ldots, w_{L-1}(n)]^t$

$W(n) = [W_e(n), W_o(n)]$

Page 84: Pre Talk 1

Let the input sequence of the filter be

$X(n) = [x(n), x(n-1), x(n-2), \ldots, x(n-L+1)]^t$

Separating this into even and odd sequences,

$X_e(n) = [x(n-1), x(n-3), \ldots, x(n-L+1)]^t$

$X_o(n) = [x(n), x(n-2), \ldots, x(n-L+2)]^t$

The desired response $d(n)$ is given by

$d(n) = W_{opt}^t(n)\,X(n)$

Page 85: Pre Talk 1

where the optimum filter coefficient vector $W_{opt}(n)$ is given by

$W_{opt}(n) = [W_{1,opt}(n), W_{2,opt}(n), \ldots, W_{L,opt}(n)]^t$

For odd $n$, the filter coefficients updated using the partial update LMS algorithm (PLMS) are given by

$W_e(n+1) = W_e(n) + \mu\,e(n)\,X_e(n)$

$W_o(n+1) = W_o(n)$

For even $n$ the filter coefficients are

$W_e(n+1) = W_e(n)$

$W_o(n+1) = W_o(n) + \mu\,e(n)\,X_o(n)$
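The alternating even/odd update above can be sketched compactly. This is an illustrative sketch only; 0-based indices stand in for the slide's 1-based even/odd split, and the filter and test signal are hypothetical.

```python
import numpy as np

def sequential_plms(x, d, L=4, mu=0.05):
    """Sequential partial-update LMS: one half of the taps is updated on odd
    n, the other half on even n (0-based stand-in for the slide's split)."""
    w = np.zeros(L)
    even, odd = np.arange(0, L, 2), np.arange(1, L, 2)
    for n in range(L - 1, len(x)):
        xn = x[n - L + 1:n + 1][::-1]
        e = d[n] - w @ xn                 # full output, half the update work
        idx = even if n % 2 == 1 else odd
        w[idx] += mu * e * xn[idx]        # only these taps move this sample
    return w

# identify an unknown (hypothetical) 4-tap system as a sanity check
rng = np.random.default_rng(7)
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w = sequential_plms(x, d)
```

Each iteration costs roughly half the multiply-accumulates of full LMS, and the weights still converge to the same solution, only about twice as slowly.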

Page 86: Pre Talk 1

The error sequence $e(n)$ is given by

$e(n) = d(n) - y(n)$

The actual output of the filter is given by

$y(n) = w^t(n)\,X(n)$

The coefficient error vectors are defined as

$V_e(n) = W_e(n) - W_{e,opt}$

$V_o(n) = W_o(n) - W_{o,opt}$

$V(n) = W(n) - W_{opt}$, with $V(n) = [V_e(n), V_o(n)]^t$

Page 87: Pre Talk 1

The necessary and sufficient condition for stability of the recursion is given by

$0 < \mu < \dfrac{2}{\lambda_{max}}$

where $\lambda_{max}$ is the maximum eigenvalue of the input signal correlation matrix.

The adaptive filter coefficients are updated by the Partial update Signed-regressor LMS algorithm (PSRLMS) as

$W(n+1) = W(n) + \mu\,\mathrm{sgn}\{\Phi(n)\}\,e(n)$

Page 88: Pre Talk 1

Using the Partial update Sign-Sign LMS algorithm (PSSLMS),

$W(n+1) = W(n) + \mu\,\mathrm{sgn}\{\Phi(n)\}\,\mathrm{sgn}\{e(n)\}$

and using the Partial update Sign LMS algorithm (PSLMS),

$W(n+1) = W(n) + \mu\,\Phi(n)\,\mathrm{sgn}\{e(n)\}$

$\mathrm{sgn}\{\cdot\}$ is the well known signum function, with

$\mathrm{sgn}\{\Phi(n)\} = [\mathrm{sgn}\{\phi(n)\}, \mathrm{sgn}\{\phi(n-1)\}, \ldots, \mathrm{sgn}\{\phi(n-L+1)\}]^t$

Page 89: Pre Talk 1

The weight updating equation using the partial update normalized signed-regressor LMS algorithm (NPSRLMS) is written as

w(n+1) = w(n) + μ(n) sgn{X(n)} e(n)

where the time-varying step size μ(n) is given by

μ(n) = μ / (ε + ||x(n)||²),  with  ||x(n)||² = X^t(n) X(n)

μ is a step-size control parameter used to control the speed of convergence, and takes values between 0 and 2 for convergence. ε is an appropriate positive number introduced to avoid divide-by-zero situations, which may arise when ||x(n)||² becomes very small.
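The normalized step size amounts to a one-line computation (the default ε value here is illustrative):

```python
import numpy as np

def normalized_step(mu, x, eps=1e-8):
    """Time-varying step mu(n) = mu / (eps + ||x(n)||^2).

    eps guards against division by a near-zero input energy,
    as described above.
    """
    return mu / (eps + float(np.dot(x, x)))
```

Note that when the input energy is large the effective step shrinks, and when it is near zero ε keeps the step finite.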

Page 90: Pre Talk 1

The weight updating equation of the partial update normalized sign-sign LMS algorithm (NPSSLMS) can be written as

w(n+1) = w(n) + μ(n) sgn{X(n)} sgn{e(n)}

and that of the partial update normalized sign LMS algorithm (NPSLMS) as

w(n+1) = w(n) + μ(n) X(n) sgn{e(n)}

Both the feedforward and feedback filter coefficients are trained by the weight update equations of all three types of partial update LMS based algorithms, i.e., the partial update normalized signed-regressor, sign-sign, and sign LMS algorithms.

Page 91: Pre Talk 1

Initially, the training is imparted by a pilot sequence d(n) during the training mode, and by the output decision y(n) during the subsequent decision-directed mode.

The output v(n) = d(n) or y(n), depending on whether it is the initial training period or the subsequent decision-directed phase.

Page 92: Pre Talk 1

The feedforward filter output y_f(n) is given by

y_f(n) = w_f^t(n) x(n)

where W_f(n) = [w_1^f(n), ..., w_p^f(n)]^t

The feedback filter output y_b(n) is given by

y_b(n) = w_b^t(n) v(n-1)

The overall output, which is the input to the decision device, is given by

y(n) = y_f(n) + y_b(n)
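One decision step of this FFF/FBF structure might be sketched as follows. This is a minimal illustration with assumed names, not the thesis implementation; a simple sign slicer for ±1 symbols is assumed for the decision device.

```python
import numpy as np

def adfe_step(wf, wb, x, v_prev, d=None):
    """One ADFE decision step: y(n) = y_f(n) + y_b(n) -> slicer.

    Training mode when the pilot symbol d is supplied; otherwise the
    slicer decision itself serves as the reference (decision-directed).
    """
    yf = float(np.dot(wf, x))       # feedforward output y_f(n)
    yb = float(np.dot(wb, v_prev))  # feedback output y_b(n)
    y = yf + yb                     # input to the decision device
    v = 1.0 if y >= 0 else -1.0     # slicer decision for +/-1 symbols
    ref = d if d is not None else v
    e = ref - y                     # training or decision-directed error
    return y, v, e
```

The returned error e would then drive whichever of the weight update rules above is in use for the FFF and FBF coefficients.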

Page 93: Pre Talk 1

Results and Conclusions

The proposed scheme is simulated to study the performance of the ADFE.

The transmitted signals take values ±1, each with probability 0.5. A random number generator provides this test signal, and in the channel additive white Gaussian noise with zero mean and variance 0.001 is added.

The impulse response of the channel is taken as a raised cosine function.

Page 94: Pre Talk 1

The initial filter coefficients of the FFF and FBF are zero. At each iteration these coefficients are modified, and at the beginning of the decision-directed mode the filter coefficients from the last iteration of the training mode are taken as the initial coefficients.

The signal after equalization is passed through the slicer. It quantizes the signal to 1 when the signal is greater than 0.5, and to -1 when the signal is less than 0.5.
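The slicer just described amounts to a one-line threshold function (mirroring the 0.5 threshold quoted above):

```python
def slicer(y):
    """Quantize the equalized sample: +1 above 0.5, otherwise -1."""
    return 1.0 if y > 0.5 else -1.0
```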

Page 95: Pre Talk 1

Figure: Frequency response of the channel (amplitude vs. frequency).

Page 96: Pre Talk 1

Figure 7.2: Transmitted signal, observed signal, and equalizer output before and after the slicer.

Page 97: Pre Talk 1

Figure: Error signal after the slicer and error signal before the slicer.

Page 98: Pre Talk 1

Figure: MSE curves (in dB, vs. number of iterations) of LMS and normalized partial update block LMS (NPBLMS) based ADFE.

A random number generator provides the test signal, and the channel is modelled as an AWGN channel with noise variance 0.01. The ensemble averaging was performed over 100 independent trials of the experiment. The transmitted signals are simple QPSK signals. N = 600 samples are generated and used to train both the FFF and the FBF, each with 4 taps.

Page 99: Pre Talk 1

Figure: Comparison of bit error rate (BER) curves vs. SINR (dB) for the LMS, NSSPLMS, NSRPLMS, and NSPLMS algorithms.

Page 100: Pre Talk 1

Implementation of the Adaptive Decision Feedback Equalizer using the DSP processor TMS320C6713

The TMS320C6713 is a fast, special-purpose Texas Instruments (TI) floating-point digital signal processor based on a very long instruction word (VLIW) architecture. This architecture and its instruction set are well suited to real-time signal processing applications.

The main tool is TI's DSP starter kit (DSK). It includes Code Composer Studio (CCS), which provides an integrated development environment (IDE) and the necessary software tools, bringing together the C compiler, assembler, linker, debugger, and so on. It has graphical capabilities, supports real-time debugging, and provides an easy-to-use software tool to build and debug programs.

Page 101: Pre Talk 1

The operating frequency is 225 MHz.

The board provides 16 MB of synchronous DRAM, 512 KB of non-volatile Flash memory (256 KB usable in the default configuration), and 4 user-accessible LEDs and DIP switches.

Internal memory includes a two-level cache architecture with 4 KB of level 1 program cache (L1P), 4 KB of level 1 data cache (L1D), and 256 KB of level 2 memory shared between program and data space.

Page 102: Pre Talk 1

The ADFE is initially in the training period, and the training sequence is known to both the transmitter and the receiver. The error signal is generated from the transmitted signal and the equalized signal. After some iterations the equalizer switches to decision-directed mode, normal transmission begins, and the coefficients of the FFF and FBF are updated based on the output of the decision device. During the training process a large step size (0.08) is chosen to attain fast initial convergence; later the step size is reduced to 0.02 in decision-directed mode to maintain a low tracking error.
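This two-stage schedule can be sketched as follows (the function name is hypothetical; the values 0.08 and 0.02 are those quoted above):

```python
def step_size(n, n_train, mu_train=0.08, mu_dd=0.02):
    """Step-size schedule: a large step during the training period for
    fast initial convergence, then a small step in decision-directed
    mode for low tracking error. n_train is the training length."""
    return mu_train if n < n_train else mu_dd
```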

Page 103: Pre Talk 1

Figure: MSE curve using the TMS320C6713. The MSE is almost negligible after 200 iterations.

Page 104: Pre Talk 1

Summary of the Present Work

We have made an attempt to develop efficient realizations of adaptive decision feedback equalizers by considering different combinations and variants of the LMS algorithm.

An efficient realization of the modified fast block LMS algorithm using the FFT has been presented. The proposed scheme provides a considerable speed-up over the sample-by-sample update LMS algorithm. Faster evaluations of the filter outputs and weight updating equations are also derived. From the computational complexity analysis, it is observed that the proposed modified FFT based fast block LMS algorithm is sixteen times faster than the sample-by-sample update LMS algorithm.

Page 105: Pre Talk 1

The ADFE is implemented using the modified FFT based fast block LMS algorithm. In this method, the incoming data is first partitioned into non-overlapping blocks of fixed length; corresponding to each block, the weights of both the FFF and FBF are evaluated and the error sequence is calculated. The overall output, which is the sum of the outputs of both the FFF and FBF, is calculated. The ADFE is initially in the training period and is later switched to decision-directed mode. The computational complexity in terms of MAC operations is also presented.

Later we extended the modified block LMS based treatment to the NBLMS based ADFE. This normalization provides certain advantages over the original LMS based ADFE: it enjoys superior convergence behaviour over its LMS counterpart at the expense of certain additional computations.

Page 106: Pre Talk 1

The computational complexity of this proposed algorithm is also analyzed. The learning curve shows a significant improvement in the convergence characteristics.

Later we extended the modified block LMS based treatment to the SBLMS based ADFE. This provides lower computational complexity than the LMS algorithm by trading off the speed of convergence.

Next we took up the combination of normalized and signed versions of the LMS algorithm, to reduce complexity and to improve the convergence characteristics.

Page 107: Pre Talk 1

Later, the ADFE is implemented using sign normalized LMS algorithms with partial updating of the filter coefficients.

Next, the ADFE is implemented on a real-time TMS320C6713 DSP processor.

Page 108: Pre Talk 1

References:

1. Haykin, S., Adaptive Filter Theory, Englewood Cliffs, NJ: Prentice-Hall, 1991.

2. Berberidis, K., and P. Karaivazoglou, "An efficient block adaptive decision feedback equalizer implemented in the frequency domain," IEEE Trans. Signal Processing, vol. 50, no. 9, pp. 2273-2285, Sept. 2002.

3. Elam, D., and C. Lovescu, "A Block Floating Point Implementation for an N-Point FFT on the TMS320C55X DSP," Texas Instruments Application Report, SPRA948, Sept. 2003.

4. Harrington, E. F., "A BPSK Decision-Feedback Equalization Method Robust to Phase and Timing Errors," IEEE Signal Processing Lett., vol. 12, no. 4, pp. 313-316, Apr. 2005.

Page 109: Pre Talk 1

5. Kavitha, V., and V. Sharma, "Tracking Analysis of an LMS Decision Feedback Equalizer for a Wireless Channel," Technical Report No. TR-PME-2006-19, DRDO-IISc Programme on Mathematical Engineering, IISc, Bangalore, Oct. 2006.

6. Khong, A. W. H., and P. A. Naylor, "Selective-tap adaptive filtering with performance analysis for identification of time-varying systems," IEEE Trans. Audio Speech Language Processing, vol. 15, no. 5, pp. 1681-1695, July 2007.

7. Lin, C. H., A. Y. Wu, and F. M. Li, "High-Performance VLSI Architecture of Decision Feedback Equalizer for Gigabit Systems," IEEE Trans. Circuits Syst. II, vol. 53, no. 9, pp. 911-915, Sept. 2006.

Page 110: Pre Talk 1

8. Godavarti, M., and A. O. Hero, III, "Partial Update LMS Algorithms," IEEE Trans. Signal Processing, vol. 53, no. 7, July 2005.

9. Parhi, K. K., "Design of Multigigabit Multiplexer-Loop-Based Decision Feedback Equalizers," IEEE Trans. Very Large Scale Integration Systems, vol. 13, no. 4, pp. 489-493, Apr. 2005.

10. Parhi, K. K., VLSI Digital Signal Processing Systems, Wiley-Interscience, New York, 1999.

11. Reuter, M., et al., "Mitigating Error Propagation Effects in a Decision Feedback Equalizer," IEEE Trans. Commun., vol. 49, no. 11, pp. 2028-2041, Nov. 2001.

Page 111: Pre Talk 1

12. Rontogiannis, A. A., and K. Berberidis, "Efficient decision feedback equalization for sparse wireless channels," IEEE Trans. Wireless Communications, vol. 2, no. 3, pp. 570-581, May 2003.

13. Wu, W. R., and Y. M. Tsuie, "An LMS-based decision feedback equalizer for IS-136 receivers," IEEE Trans. Commun., vol. 51, pp. 130-143, 2002.

Page 112: Pre Talk 1

List of Publications

JOURNALS

[01] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Low Complexity Adaptive Equalization Techniques for Nonstationary Signals", Journal of Communication and Computer, vol. 6, no. 11, 2011, ISSN 1548-7709, USA.

[02] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy, "Normalized Signed Regressor Partial Update LMS based Adaptive Decision Feedback Equalization", International Journal of Emerging Technologies and Applications in Engineering, Technology and Sciences (IJ-ETA-ETS), ISSN 0974-3588, Jan.-June 2011, vol. 4, issue 1, pp. 48-52.

Page 113: Pre Talk 1

[03] Ch. Sumanth Kumar, D. Madhavi, K.V.V.S. Reddy, "An Efficient Realization of Normalized Block LMS based ADFE", Advances in Wireless and Mobile Communications, ISSN 0973-6972, vol. 4, no. 1 (2011), pp. 11-18.

[04] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Optimized Adaptive Equalizer for Wireless Communications", International Journal of Computer Applications, USA, no. 16, ISBN 978-93-80746-57-8, pp. 29-33, 2011.

[05] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Block based Partial Update NLMS Algorithm for Adaptive Decision Feedback Equalization", International Journal of Signal and Image Processing, communicated.

Page 114: Pre Talk 1

CONFERENCES

[06] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Block and Partial Update Sign Normalized LMS Based Adaptive Decision Feedback Equalizer", in Proc. 2011 International Conference on Devices & Communications (ICDeCom-11), Birla Institute of Technology, Mesra, Ranchi, IEEE Xplore, IEEE Catalog Number CFP1109M-ART, ISBN 978-1-4244-9190-2, DOI 10.1109/ICDECOM.2011.5738469, Feb. 24-25, 2011.

[07] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Optimized Adaptive Equalizer for Wireless Communications", International Conference on VLSI, Communication & Instrumentation (ICVCI 2011), Kottayam, Kerala, Apr. 7-9, 2011.

Page 115: Pre Talk 1

[08] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy, "A New Sign Normalized Block based Adaptive Decision Feedback Equalizer for Wireless Communication Systems", 2010 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Coimbatore, IEEE Xplore, IEEE Catalog Number CFP1020J-ART, ISBN 978-1-4244-5967-4, Dec. 28-29, 2010.

[09] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Partial Update Sign LMS Based Adaptive Decision Feedback Equalizer", Second International Conference on Advanced Computing & Communication Technologies for High Performance Applications, Federal Institute of Science & Technology, Angamaly, Cochin, Kerala, Dec. 7-10, 2010.

Page 116: Pre Talk 1

[10] Ch. Sumanth Kumar, D. Madhavi, N. Jyothi, "High Performance Architectures for Recursive Loop Algorithms", International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), Kongu Engineering College, Perundurai, Erode, IEEE Xplore, June 4-6, 2009.

[11] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Pipelining and Parallel Computing Architectures of Equalizers for Gigabit Systems", International Conference on Advanced Computing & Communication Technologies for High Performance Applications, organized by Federal Institute of Science & Technology, Angamaly, Cochin, Kerala, pp. 660-664, Sept. 24-26, 2008.

Page 117: Pre Talk 1

[12] Ch. Sumanth Kumar, Rafi Ahamed Shaik, K.V.V.S. Reddy, "A New Normalized Block LMS based Adaptive Decision Feedback Equalizer for Wireless Communications", International Conference on Convergence of Science & Engineering in Education and Research: 'A Global Perspective in the New Millennium' (ICSE 2010), Dayananda Sagar Institutions, Bangalore, Apr. 21-23, 2010.

[13] Ch. Sumanth Kumar, K.V.V.S Reddy, Rafi Ahamed Shaik, "Low Complexity Adaptive Equalization Techniques for Non-stationary Signals", International Conference on Advances in Information, Communication Technology and VLSI Design (ICAICV 2010), PSG College of Technology, Coimbatore, p. 49, Aug. 6-7, 2010.

Page 118: Pre Talk 1

[14] Ch. Sumanth Kumar, D. Madhavi, N. Jyothi, "Computational Approaches for Real Time High Speed Implementation of Quantization Algorithms", 2010 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Tamilnadu College of Engineering, Coimbatore, IEEE Xplore, IEEE Catalog Number CFP1020J-ART, ISBN 978-1-4244-5967-4, Dec. 28-29, 2010.

[15] Ch. Sumanth Kumar, K.V.V.S. Reddy, "A New Normalized Signed LMS based Adaptive Decision Feedback Equalizer", National Conference on Electronics, Communications, and Computers (NCECC-2009), organized by IETE Navi Mumbai Sub-Centre, pp. 78-81, Feb. 13-14, 2009.

Page 119: Pre Talk 1

[16] Ch. Sumanth Kumar, K.V.V.S. Reddy, P. Naga Lingeswara Rao, "An Efficient Realization of Normalized Block LMS based ADFE", National Conference on Signal Processing and Communication Systems (NCSPCS 2010), RVR & JC College of Engineering, Guntur, p. 62, Feb. 25-26, 2010.

[17] Ch. Sumanth Kumar, K.V.V.S. Reddy, "Efficient VLSI Architectures for High Speed Nonlinear Adaptive Equalizers", National Conference on Signal Processing & Communication Systems, organized by the Department of ECE, R.V.R & J.C College of Engineering, Guntur, pp. 227-231, Feb. 20-21, 2008.

Page 120: Pre Talk 1

Thank You