
Equalization in a wideband TDMA system
• Three basic equalization methods
• Linear equalization (LE)
• Decision feedback equalization (DFE)
• Sequence estimation (MLSE-VA)
• Example of channel estimation circuit

Three basic equalization methods (1)
Linear equalization (LE): Performance is not very good when the frequency response of the frequency-selective channel contains deep fades.
Zero-forcing algorithm aims to eliminate the intersymbol interference (ISI) at decision time instants (i.e. at the center of the bit/symbol interval).
Least-mean-square (LMS) algorithm will be investigated in greater detail in this presentation.
Recursive least-squares (RLS) algorithm offers faster convergence, but is computationally more complex than LMS (since matrix inversion is required).

Three basic equalization methods (2)
Decision feedback equalization (DFE): Performance is better than LE, due to cancellation of the ISI tails of previously received symbols.
Decision feedback equalizer structure:
[Block diagram: the received signal enters the feed-forward filter (FFF); its output is combined with the output of the feed-back filter (FBF) and passed to the symbol decision; the decided symbols are fed back through the FBF, and both filters share a mechanism for adjustment of the filter coefficients.]

Three basic equalization methods (3)
Maximum likelihood sequence estimation using the Viterbi algorithm (MLSE-VA): Best performance. Operation of the Viterbi algorithm can be visualized by means of a trellis diagram with m^(K-1) states, where m is the symbol alphabet size and K is the length of the overall channel impulse response (in samples).
[Figure: state trellis diagram; horizontal axis = sample time instants, vertical axis = state, lines = allowed transitions between states.]

Linear equalization, zero-forcing algorithm
Basic idea: the cascade of the transmitted symbol spectrum, the channel (including the transmit and receive filters) and the equalizer should produce a raised cosine spectrum:

Z(f) = B(f) · H(f) · E(f),   0 ≤ f ≤ f_s = 1/T

where
B(f) = transmitted symbol spectrum
H(f) = channel frequency response (incl. T & R filters)
E(f) = equalizer frequency response
Z(f) = raised cosine spectrum.

Zero-forcing equalizer
[Block diagram: transmitted impulse sequence -> communication channel -> r(k) -> equalizer -> z(k) = input to decision circuit.]

Communication channel: FIR filter with 2N+1 coefficients, channel impulse response
h(k) = Σ_{n=-N}^{N} h_n δ(k-n)

Equalizer: FIR filter with 2M+1 coefficients, equalizer impulse response
c(k) = Σ_{m=-M}^{M} c_m δ(k-m)

Overall channel: coefficients of the equivalent FIR filter
f_k = Σ_{m=-M}^{M} c_m h_{k-m},   -M ≤ k ≤ M

(in fact the equivalent FIR filter consists of 2M+1+2N coefficients, but the equalizer can only “handle” 2M+1 equations)

Zero-forcing equalizer
We want the overall filter response to be non-zero at the decision time instant k = 0 and zero at all other sampling times k ≠ 0:

f_k = Σ_{m=-M}^{M} c_m h_{k-m} = 1 for k = 0,  0 for k ≠ 0

This leads to a set of 2M+1 equations:

h_0 c_{-M} + h_{-1} c_{-M+1} + … + h_{-2M} c_M = 0        (k = -M)
h_1 c_{-M} + h_0 c_{-M+1} + … + h_{-2M+1} c_M = 0         (k = -M+1)
  ⋮
h_M c_{-M} + h_{M-1} c_{-M+1} + … + h_{-M} c_M = 1        (k = 0)
  ⋮
h_{2M-1} c_{-M} + h_{2M-2} c_{-M+1} + … + h_{-1} c_M = 0  (k = M-1)
h_{2M} c_{-M} + h_{2M-1} c_{-M+1} + … + h_0 c_M = 0       (k = M)
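A minimal NumPy sketch (not part of the original slides) of solving these 2M+1 equations for the equalizer taps; the example channel coefficients and the function name zero_forcing_taps are illustrative only:

import numpy as np

def zero_forcing_taps(h, center, M):
    """Solve the 2M+1 zero-forcing equations sum_m c_m h_{k-m} = delta_k, -M <= k <= M."""
    def h_at(i):                               # h_i, taken as zero outside the known support
        j = center + i
        return h[j] if 0 <= j < len(h) else 0.0

    A = np.array([[h_at(k - m) for m in range(-M, M + 1)]
                  for k in range(-M, M + 1)])  # row k, column m -> h_{k-m}
    d = np.zeros(2 * M + 1)
    d[M] = 1.0                                 # force f_0 = 1 and f_k = 0 at the other constrained lags
    return np.linalg.solve(A, d)               # equalizer taps c_{-M} .. c_M

# illustrative channel with its main tap h_0 at index 1
h = np.array([0.1, 1.0, 0.3, -0.1])
c = zero_forcing_taps(h, center=1, M=3)
print(np.round(np.convolve(h, c), 3))          # overall response: ~1 at the centre lag, ~0 at the constrained lags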

Minimum Mean Square Error (MMSE)
The aim is to minimize

J = E[ |e_k|² ]

where the error is e_k = z_k - b̂_k (or e_k = b̂_k - z_k, depending on the source).

[Block diagram: transmitted symbols s(k) -> channel -> received samples r(k) -> equalizer -> z_k = input to decision circuit; the error e_k is the difference between z_k and b̂_k, the estimate of the k:th symbol.]

MSE vs. equalizer coefficients
J = E[ |e_k|² ] is a quadratic multi-dimensional function of the equalizer coefficient values.

[Figure: bowl-shaped surface of J plotted over two equalizer coefficients c_1 and c_2; illustration of the case of two real-valued equalizer coefficients (or one complex-valued coefficient).]

MMSE aim: find the minimum value directly (Wiener solution), or use an algorithm that recursively changes the equalizer coefficients in the correct direction (towards the minimum value of J)!

Wiener solution
We start with the Wiener-Hopf equations in matrix form:

R c_opt = p

where
R = correlation matrix (M x M) of the received (sampled) signal values r_k
p = vector (of length M) indicating the cross-correlation between the received signal values r_k and the estimate of the received symbol b̂_k
c_opt = vector (of length M) consisting of the optimal equalizer coefficient values.
(We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)

Correlation matrix R & vector p
Before we can perform the expectation operation, we must know the stochastic properties of the transmitted signal (and of the channel, if it is time-varying). Usually we do not have this information, so some non-stochastic algorithm such as least-mean-square (LMS) must be used.

R = E[ r_k* r_kᵀ ]
p = E[ r_k* b_k ]

where r_k = [ r_k, r_{k-1}, …, r_{k-M+1} ]ᵀ  (the M most recent received samples).

Algorithms
Stochastic information (R and p) is available:
1. Direct solution of the Wiener-Hopf equations:
2. Newton’s algorithm (fast iterative algorithm)
3. Method of steepest descent (this iterative algorithm is slow but easier to implement)
R and p are not available:
Use an algorithm that is based on the received signal sequence directly. One such algorithm is Least-Mean-Square (LMS).
Direct solution: R c_opt = p  =>  c_opt = R⁻¹ p. Inverting a large matrix is difficult!
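As an illustration of option 1, a hedged NumPy sketch that estimates R and p by sample averages over a known training block and then solves R c_opt = p with a linear solver rather than forming R⁻¹ explicitly; the function name and the zero-delay alignment of r and b are assumptions, not from the slides:

import numpy as np

def wiener_taps(r, b, M):
    """Estimate the Wiener solution of R c = p from a training block.

    r : received complex samples, b : known training symbols,
    M : number of equalizer taps. Sample averages replace the expectations.
    """
    R = np.zeros((M, M), dtype=complex)
    p = np.zeros(M, dtype=complex)
    count = 0
    for k in range(M - 1, len(r)):
        rk = r[k - M + 1 : k + 1][::-1]     # r_k = [r_k, r_{k-1}, ..., r_{k-M+1}]
        R += np.outer(np.conj(rk), rk)      # sample estimate of E[ r_k* r_k^T ]
        p += np.conj(rk) * b[k]             # sample estimate of E[ r_k* b_k ]
        count += 1
    R /= count
    p /= count
    return np.linalg.solve(R, p)            # solve R c = p instead of inverting R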

Conventional linear equalizer of LMS type
[Block diagram: transversal FIR filter with 2M+1 filter taps; the received complex signal samples r_{k+M}, …, r_{k-M} pass through delay elements T, are weighted by the complex-valued tap coefficients c_{-M}, …, c_M of the equalizer filter and summed to give z_k; the symbol decision produces b̂_k, the estimate of the k:th symbol, and the error e_k drives the LMS algorithm for adjustment of the tap coefficients (Widrow).]
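A hedged Python sketch of such an LMS-adapted transversal equalizer in training mode (error convention e_k = z_k - b_k; step size, alignment and names chosen for illustration, not taken from the slides):

import numpy as np

def lms_equalizer(r, b_train, M, step=0.01):
    """Adapt a (2M+1)-tap linear equalizer with the LMS rule (training mode).

    r : received complex samples, b_train : known training symbols,
    step : LMS step size. Returns the adapted taps c_{-M}..c_M.
    """
    c = np.zeros(2 * M + 1, dtype=complex)
    for k in range(M, len(r) - M):
        x = r[k - M : k + M + 1][::-1]     # [r_{k+M}, ..., r_{k-M}], paired with c_{-M}..c_M
        z = np.dot(c, x)                   # equalizer output z_k = sum_m c_m r_{k-m}
        e = z - b_train[k]                 # error e_k = z_k - b_k
        c -= step * e * np.conj(x)         # move the taps against the gradient of |e_k|^2
    return c

In decision-directed (tracking) mode the known training symbol would be replaced by the decided symbol b̂_k.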

Joint optimization of coefficients and phase
[Block diagram: received samples r(k) -> equalizer filter -> multiplication by e^{-jφ} (phase synchronization) -> z_k -> symbol decision -> b̂_k; the error e_k drives both the coefficient updating and the phase synchronization.]

Minimize:

J = E[ |e_k|² ]

where
e_k = z_k - b̂_k = ( Σ_{m=-M}^{M} c_m r_{k-m} ) e^{-jφ} - b̂_k

(Godard; Proakis, Ed.3, Section 11-5-2)

Least-mean-square (LMS) algorithm (derived from the method of steepest descent) for convergence towards the minimum mean square error (MMSE)

Real part of the n:th coefficient:
Re c_n(i+1) = Re c_n(i) - Δ · ∂|e_k|² / ∂ Re c_n

Imaginary part of the n:th coefficient:
Im c_n(i+1) = Im c_n(i) - Δ · ∂|e_k|² / ∂ Im c_n

Phase:
φ(i+1) = φ(i) - Δ · ∂|e_k|² / ∂φ

In total 2(2M+1)+1 equations; i = iteration index, Δ = step size of the iteration, and the instantaneous value |e_k|² = e_k e_k* is used in place of the expectation E[ |e_k|² ].

LMS algorithm (cont.)
After some calculation, the recursion equations are obtained in the form
Re c_n(i+1) = Re c_n(i) - 2Δ Re{ e_k r*_{k-n} e^{jφ} }

Im c_n(i+1) = Im c_n(i) - 2Δ Im{ e_k r*_{k-n} e^{jφ} }

φ(i+1) = φ(i) + 2Δ Im{ b̂_k* e^{-jφ} Σ_{m=-M}^{M} c_m r_{k-m} }

where e_k = ( Σ_{m=-M}^{M} c_m r_{k-m} ) e^{-jφ} - b̂_k.
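A sketch of one iteration of these recursions in complex form (the real and imaginary coefficient updates combined into a single complex tap update); the error convention and signs follow the reconstruction above and the names are illustrative, so they should be checked against Proakis before use:

import numpy as np

def joint_lms_step(c, phi, r_window, b_hat, step):
    """One joint LMS update of the equalizer taps and the carrier phase.

    c : complex taps c_{-M}..c_M, r_window : samples [r_{k+M}, ..., r_{k-M}],
    b_hat : decided (or training) symbol, step : step size Delta.
    """
    y = np.dot(c, r_window)                    # sum_m c_m r_{k-m}
    z = y * np.exp(-1j * phi)                  # phase-corrected equalizer output z_k
    e = z - b_hat                              # error e_k = z_k - b_hat_k
    c = c - 2 * step * e * np.conj(r_window) * np.exp(1j * phi)              # tap update
    phi = phi + 2 * step * np.imag(np.conj(b_hat) * np.exp(-1j * phi) * y)   # phase update
    return c, phi, e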

Effect of iteration step size
Δ smaller:
• slow acquisition
• poor tracking performance

Δ larger:
• poor stability
• large variation around the optimum value

Decision feedback equalizer
[Block diagram: the received samples r_{k+M}, …, r_{k-M} feed the feed-forward filter (FFF), a transversal filter with delay elements T and taps c_{-M}, …, c_M; the previous decisions b̂_{k-1}, …, b̂_{k-Q} feed the feed-back filter (FBF) with taps q_1, …, q_Q; the two filter outputs are combined to give z_k, the symbol decision produces b̂_k, and the error e_k drives the LMS algorithm for tap coefficient adjustment.]

Decision feedback equalizer (cont.)

The purpose is again to minimize

J = E[ |e_k|² ]

where
e_k = z_k - b̂_k = Σ_{m=-M}^{M} c_m r_{k-m} - Σ_{n=1}^{Q} q_n b̂_{k-n} - b̂_k

The feedforward filter (FFF) is similar to the filter in the linear equalizer:
• tap spacing smaller than the symbol interval is allowed => fractionally spaced equalizer => oversampling by a factor of 2 or 4 is common.

The feedback filter (FBF) is used for either reducing or canceling (difference: see next slide) the samples of previous symbols at the decision time instants:
• tap spacing must be equal to the symbol interval.
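A small sketch of the resulting decision-point computation, assuming the cancellation (minus-sign) convention for the FBF used above; names are illustrative:

import numpy as np

def dfe_output(c, q, r_window, past_decisions):
    """Compute the DFE decision-point sample z_k for one symbol.

    c : FFF taps c_{-M}..c_M, q : FBF taps q_1..q_Q,
    r_window : received samples [r_{k+M}, ..., r_{k-M}],
    past_decisions : previous decisions [b_hat_{k-1}, ..., b_hat_{k-Q}].
    """
    return np.dot(c, r_window) - np.dot(q, past_decisions)  # FFF output minus fed-back post-cursor ISI

# the decision device then maps z_k to the nearest symbol, e.g. for BPSK:
# b_hat_k = 1.0 if dfe_output(...).real >= 0 else -1.0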

Decision feedback equalizer (cont.)

The coefficients of the feedback filter (FBF) can be obtained in either of two ways:
• recursively (using the LMS algorithm), in a similar fashion as the FFF coefficients, or
• by calculation from the FFF coefficients and the channel coefficients (exact ISI cancellation is achieved in this way, but channel estimation is necessary):

q_n = Σ_{m=-M}^{M} c_m h_{n-m},   n = 1, 2, …, Q

(Proakis, Ed.3, Section 11-2; Proakis, Ed.3, Section 10-3-1)
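A brief sketch of the second option, computing the FBF taps from the FFF taps and an estimated channel; the function and argument names are illustrative:

import numpy as np

def fbf_coefficients(c, h, h_center, M, Q):
    """FBF taps q_n = sum_m c_m h_{n-m}, n = 1..Q, from the FFF taps and a channel estimate.

    c : FFF taps c_{-M}..c_M, h : channel coefficients with h_0 at index h_center.
    """
    def h_at(i):                               # h_i, zero outside the estimated support
        j = h_center + i
        return h[j] if 0 <= j < len(h) else 0.0

    return np.array([sum(c[m + M] * h_at(n - m) for m in range(-M, M + 1))
                     for n in range(1, Q + 1)])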

Channel estimation circuit
[Block diagram: the estimated symbols b̂_k, …, b̂_{k-M} pass through delay elements T and are weighted by the taps c_0, c_1, …, c_M of a transversal filter (filter length = CIR length); the filter output is compared with r_k, the k:th sample of the received signal, and the resulting error drives the LMS algorithm that adjusts the taps. The adapted taps are the estimated channel coefficients, ĥ_m = c_m.]

(Proakis, Ed.3, Section 11-3)

Channel estimation circuit (cont.)
1. Acquisition phase: uses a “training sequence”. The symbols are known at the receiver, so b̂_k = b_k.
2. Tracking phase: uses estimated symbols (decision-directed mode). The symbol estimates are obtained from the decision circuit (note the delay in the feedback loop!). Since the estimation circuit is adaptive, time-varying channel coefficients can be tracked to some extent.

Alternatively: blind estimation (no training sequence)
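A hedged sketch of the acquisition phase: LMS adaptation of the channel estimate driven by a known training sequence (filter length, step size and names are illustrative, not from the slides):

import numpy as np

def lms_channel_estimate(r, b, L, step=0.05):
    """Estimate L channel coefficients h_0..h_{L-1} with LMS during training.

    r : received samples, b : known training symbols (acquisition phase),
    step : LMS step size. Returns the estimated coefficients h_hat.
    """
    h_hat = np.zeros(L, dtype=complex)
    for k in range(L - 1, len(r)):
        b_window = b[k - L + 1 : k + 1][::-1]   # [b_k, b_{k-1}, ..., b_{k-L+1}]
        y = np.dot(h_hat, b_window)             # filter output = modelled received sample
        e = r[k] - y                            # error between the actual and modelled sample
        h_hat += step * e * np.conj(b_window)   # LMS update of the channel estimate
    return h_hat

In the tracking phase, b would be replaced by the (delayed) symbol estimates from the decision circuit.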

Channel estimation circuit in receiver
[Block diagram: the received signal samples r(k) feed both the channel estimation circuit and the equalizer & decision circuit; the channel estimation circuit delivers the estimated channel coefficients ĥ(m) to the equalizer & decision circuit, which produces the “clean” output symbols; the channel estimator is driven either by the symbol estimates b̂(k) (with errors) fed back from the decision circuit or by the training symbols b(k) (no errors).]
Channel estimation is mandatory for MLSE-VA, optional for DFE.

Theoretical ISI cancellation receiver
[Block diagram: the received samples pass through a filter matched to the sampled channel impulse response; the estimated future symbols b̂_{k+1}, …, b̂_{k+P} are used for precursor cancellation of future symbols and the estimated previous symbols b̂_{k-1}, …, b̂_{k-Q} for postcursor cancellation of previous symbols, leaving the decision variable for b̂_k.]
(extension of DFE, for simulation of matched filter bound)
If previous and future symbols can be estimated without error (impossible in a practical system), matched filter performance can be achieved.

MLSE-VA receiver structure
[Block diagram: received signal r(t) -> matched filter -> NW filter -> samples y(k) -> MLSE (VA) -> detected symbols b̂(k); the channel estimation circuit produces the overall channel estimate f(k), which is supplied both to the NW filter and to the MLSE (VA).]
The MLSE-VA circuit causes a delay of the estimated symbol sequence before it is available for channel estimation
=> channel estimates may be out-of-date (in a fast time-varying channel)

MLSE-VA receiver structure (cont.)
The probability of receiving sample sequence y (note: vector form) of length N, conditioned on a certain symbol sequence estimate and overall channel estimate:
p( y | b̂, f̂ ) = ∏_{k=1}^{N} p( y_k | b̂, f̂ ) = ( 1 / (2πσ²) )^{N/2} exp( - (1/(2σ²)) Σ_{k=1}^{N} | y_k - Σ_{n=0}^{K-1} f̂_n b̂_{k-n} |² )

• The factorization into a product over k is allowed since the noise samples are uncorrelated due to the NW (= noise whitening) filter.
• The Gaussian form of each factor follows since we have AWGN; K is the length of f(k).
• Objective: find the symbol sequence b̂ that maximizes this probability.
• Metric to be minimized (select the best b̂ using the VA):
Σ_{k=1}^{N} | y_k - Σ_{n=0}^{K-1} f̂_n b̂_{k-n} |²
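A small sketch of evaluating this metric for one candidate symbol sequence (symbols before the block are assumed to be zero; the names are illustrative):

import numpy as np

def mlse_metric(y, f_hat, b_hat):
    """Metric sum_k | y_k - sum_n f_hat_n * b_hat_{k-n} |^2 for one candidate sequence.

    y : received samples, f_hat : overall CIR estimate (length K),
    b_hat : candidate symbol sequence (symbols before the block assumed zero).
    """
    K = len(f_hat)
    metric = 0.0
    for k in range(len(y)):
        isi = sum(f_hat[n] * b_hat[k - n] for n in range(K) if k - n >= 0)
        metric += abs(y[k] - isi) ** 2
    return metric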

MLSE-VA receiver structure (cont.)
We want to choose that symbol sequence estimate and overall channel estimate which maximizes the conditional probability.
Since a product of exponentials corresponds to a sum of exponents, the metric to be minimized is a sum expression.
If the length of the overall channel impulse response in samples (i.e. the number of channel coefficients) is K, in other words the time span of the channel is (K-1)T, the next step is to construct a state trellis in which a state is defined as a certain combination of the K-1 previous symbols causing ISI on the k:th symbol.
[Figure: overall channel impulse response f(k), k = 0, …, K-1. Note: this is the overall CIR, including the response of the matched filter and the NW filter.]

MLSE-VA receiver structure (cont.)
At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m=2, K=5):
[Figure: a binary symbol sequence with a sliding window. At each time instant (k-3, k-2, k-1, k, …) the K-1 = 4 previously detected bits cause ISI on the bit detected at that instant, so there are 2⁴ = 16 states; as the window slides forward, the bit detected at each time instant shifts into the state and the oldest bit drops out.]

MLSE-VA receiver structure (cont.)
[Figure: state trellis diagram over the time instants k-3, k-2, k-1, k, k+1.]

Number of states = m^(K-1), where m is the alphabet size. The “best” state sequence is estimated by means of the Viterbi algorithm (VA).

Of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.
Proakis, Ed.3, Section 10-1-3
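For completeness, a compact sketch of MLSE by the Viterbi algorithm for the metric above, with m^(K-1) trellis states; the assumption that the symbols preceding the block equal the first alphabet symbol, and all names, are illustrative rather than taken from the slides:

import numpy as np
from itertools import product

def mlse_viterbi(y, f_hat, alphabet=(-1.0, 1.0)):
    """MLSE by the VA, minimizing sum_k | y_k - sum_n f_hat_n b_{k-n} |^2.

    y : received samples after the matched + NW filters,
    f_hat : estimated overall CIR f(0..K-1),
    alphabet : symbol alphabet of size m; the trellis has m^(K-1) states.
    """
    K = len(f_hat)
    m = len(alphabet)
    # a state is the tuple (b_{k-1}, ..., b_{k-K+1}) of the K-1 previous symbol indices
    states = list(product(range(m), repeat=K - 1))
    index = {s: i for i, s in enumerate(states)}

    start = index[(0,) * (K - 1)]              # symbols before the block assumed = alphabet[0]
    cost = np.full(len(states), np.inf)
    cost[start] = 0.0
    paths = {start: []}

    for yk in y:
        new_cost = np.full(len(states), np.inf)
        new_paths = {}
        for si, s in enumerate(states):
            if not np.isfinite(cost[si]):
                continue
            for b in range(m):                 # hypothesised current symbol b_k
                past = (b,) + s                # (b_k, b_{k-1}, ..., b_{k-K+1})
                pred = sum(f_hat[n] * alphabet[past[n]] for n in range(K))
                branch = abs(yk - pred) ** 2   # branch metric for this transition
                ni = index[past[:-1]]          # next state drops the oldest symbol
                total = cost[si] + branch
                if total < new_cost[ni]:       # keep only the survivor path per state
                    new_cost[ni] = total
                    new_paths[ni] = paths[si] + [alphabet[b]]
        cost, paths = new_cost, new_paths

    best = int(np.argmin(cost))
    return np.array(paths[best])               # detected symbol sequence b_hat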