
    Adaptive Systems, Problem Classes

Christian Feldbauer, [email protected], phone 8734435, Signal Processing and Speech Communication Laboratory, Inffeldgasse 16c/II

    last modified on January 27, 2009 for Winter term 2008/09


    Organizational Information

    Course Webpage

    http://www.spsc.tugraz.at/courses/adaptive/

A newer version of this document may be available there.

    Newsgroup

There is a newsgroup for the discussion of all course-relevant topics at: news:tu-graz.lv.adaptive

    Schedule

Eight meetings (90 minutes each) on Tuesdays from 12:15 to 13:45 in lecture hall i11. Please refer to TUGonline to get the actual schedule.

    Grading

Three homework assignments consisting of analytical problems as well as MATLAB simulations (33 or 34 points each, 100 points in total without bonus problems). Solving bonus problems gives additional points. Work should be done in pairs.

achieved points    grade
88 ...             1
75 ... 87          2
62 ... 74          3
49 ... 61          4
... 48             5

    Prerequisites

    (Discrete-time) Signal Processing (FIR/IIR Filters, z-Transform, DTFT, . . . )

    Stochastic Signal Processing (Expectation Operator, Correlation, . . . )

    Linear Algebra (Matrix Calculus, Eigenvector/-value Decomposition, Gradient, . . . )

    MATLAB


    Contents

1 The Optimum Linear Filtering Problem: Least-Squares and Wiener Filters  4
  1.1 Transversal Filter  4
  1.2 The Linear Filtering Problem  4
  1.3 Least-Squares Filters  5
  1.4 The Wiener Filter  6
  1.5 System Identification  7
  1.6 System Identification in a Noisy Environment  7
  1.7 Iterative Solution without Matrix Inversion: Gradient Search  8

2 Adaptive Transversal Filter Using The LMS Algorithm  8
  2.1 The LMS Adaptation Algorithm  8
    2.1.1 How to choose µ?  8
    2.1.2 LMS-Adaptive Transversal Filter  9
  2.2 Normalized LMS Adaptation Algorithm  9
  2.3 MATLAB Exercises  10

3 System Identification and LMS Performance  11
  3.1 System Identification Using an Adaptive Filter  11
  3.2 LMS Performance Analysis  11
    3.2.1 Convergence Time Constant  11
    3.2.2 Minimum Error, Excess Error, and Misadjustment  11
    3.2.3 LMS-Adaptive Filter Design Trade-Off  12
  3.3 MATLAB Exercises  12
  3.4 Problems  14

4 Interference Cancelation  16
  4.1 Principle  16
  4.2 MATLAB Exercises  16
  4.3 Problems  17

5 Adaptive Linear Prediction  18
  5.1 Autoregressive spectrum analysis  18
  5.2 Linear prediction  18
  5.3 Yule-Walker Equations  18
  5.4 Periodic Interference Cancelation without an External Reference Source  19
  5.5 MATLAB Exercises  19
  5.6 Problems  20

6 Adaptive Equalization  20
  6.1 Principle  20
  6.2 Decision-Directed Learning  21
  6.3 MATLAB Exercises  21
  6.4 Problems  22
  6.5 More Problems: Alternative Equalizer Structures  22

A Moving Average (MA) Process  24

B Autoregressive (AR) Process  24


1 The Optimum Linear Filtering Problem: Least-Squares and Wiener Filters

    1.1 Transversal Filter

We write the convolution sum as an inner vector product:

y[n] = \sum_{k=0}^{N-1} c_k^*[n] x[n-k] = c^H[n] x[n]

n ... time index, n ∈ Z
x[n] ... input sample at time n
x[n] = [x[n], x[n-1], ..., x[n-N+1]]^T ... tap-input vector at time n
c^H[n] = (c^*)^T[n] = [c_0^*[n], c_1^*[n], ..., c_{N-1}^*[n]] ... coefficient vector at time n (^T: transposed, ^*: conjugated)
y[n] ... output sample at time n
N ... number of coefficients, length of the tap-input vector
N-1 ... number of delay elements, order

Figure 1: Transversal filter structure (delay line z^{-1} with tap inputs x[n], ..., x[n-N+1], coefficients c_0[n], ..., c_{N-1}[n], and output y[n]).

Special case: c[n] = c yields a time-invariant FIR filter of order N-1.
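In MATLAB, this output computation can be sketched as follows (a minimal sketch; the coefficient values and the input signal are invented for the example):

% Transversal filter output y[n] = c^H x[n] (minimal sketch)
N = 4;                          % number of coefficients
c = [0.5; 0.25; -0.1; 0.05];    % example coefficient vector (hypothetical values)
x = randn(100, 1);              % example input signal
y = zeros(size(x));
xvec = zeros(N, 1);             % tap-input vector; delay elements start at zero
for n = 1:length(x)
    xvec = [x(n); xvec(1:N-1)]; % shift in the new input sample
    y(n) = c' * xvec;           % inner product c^H x[n] (c' is the conjugate transpose)
end
% for a fixed c this is equivalent to: y = filter(conj(c), 1, x)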

    1.2 The Linear Filtering Problem

The problem is to approximate a desired signal d[n] by filtering the input signal x[n]. For simplicity, we first consider a fixed (i.e., non-adaptive) filter c[n] = c.

Figure 2: The linear filtering problem (input x[n], filter c = [c_0, ..., c_{N-1}]^T, output y[n], desired signal d[n], error e[n] = d[n] - y[n]).

    Find the optimum filter coefficients c. Have we defined OPTIMALITY yet?


    1.3 Least-Squares Filters

Assume a (finite) set of observations of {x[n]} and of {d[n]} is given (e.g., all past samples from n = 0 until now). We define a deterministic cost function as

J_LS(c) = \sum_{k=0}^{n} |e[k]|^2,

and the problem is to find those filter coefficients that minimize this cost function:

c_LS = \arg\min_c J_LS(c).

Problem 1.1: The following input/output measurements performed on a black box are given:

x[n] -> [ ? ] -> d[n]

n    x[n]   d[n]
0    -1     -3
1    -1     -5
2     1      0

Find the optimum least-squares filter with N = 2 coefficients. Use matrix/vector notation for the general solution.

Problem 1.2: The previous problem has demonstrated that gradient calculus is important. To practice this calculus, determine \nabla_c J(c) for the following cost functions:

1. J(c) = K

2. J(c) = c^T v = v^T c = <c, v>

3. J(c) = c^T c = ||c||^2 = <c, c>

4. J(c) = c^T A c, where A^T = A.

MATLAB Exercise 1.1 (Exponentially-Weighted Least Squares):
(i) For the linear filtering problem shown before, derive the optimum filter coefficients c in the sense of exponentially weighted least squares, i.e., find c[n] = \arg\min_c J(c, n), where the cost function is

J(c, n) = \sum_{k=n-M+1}^{n} \lambda^{n-k} |e[k]|^2

with the so-called forgetting factor 0 < λ ≤ 1. Use vector/matrix notation. Hint: a diagonal weighting matrix may be useful. Explain the effect of the weighting and answer for what scenario(s) such an exponential weighting may be meaningful.
(ii) Write a MATLAB function that computes the optimum filter coefficients in the sense of exponentially weighted least squares according to the following specifications:

    function c = ls_filter( x, d, N, lambda)

    % x ... input signal

    % d ... desired output signal (of same length as x)

    % N ... number of filter coefficients

% lambda ... optional "forgetting factor", 0 < lambda <= 1
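A minimal reference implementation consistent with this specification might look as follows (our sketch, not the official course solution; it solves the weighted normal equations directly and assumes real-valued signals):

function c = ls_filter( x, d, N, lambda)
% Exponentially weighted least-squares filter (sketch): minimizes
% sum_k lambda^(n-k) |e[k]|^2 over the provided segment.
if nargin < 4, lambda = 1; end      % plain least squares by default
x = x(:); d = d(:);
n = length(x);
% data matrix: row k holds the tap-input vector [x(k), x(k-1), ..., x(k-N+1)]
X = zeros(n, N);
for k = 1:N
    X(k:n, k) = x(1:n-k+1);         % delay elements initialized with zeros
end
w = lambda .^ ((n-1):-1:0).';       % weights lambda^(n-k); newest sample weighted 1
Xw = X .* repmat(w, 1, N);          % apply the diagonal weighting matrix row-wise
c = (Xw.' * X) \ (Xw.' * d);        % weighted normal equations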


(iii) Identification of a Time-Varying System
For the unknown system, implement a filter with the following time-varying 3-sample impulse response:

h[n] = [ 1, 1 + 0.002 n, 1 - 0.002 n ]^T .

Generate 1000 input/output sample pairs (x[n] and d[n] for n = 0 ... 999) using stationary white noise with zero mean and variance σ_x^2 = 1 as the input signal x[n]. All delay elements are initialized with zero (i.e., x[n] = 0 for n < 0). The adaptive filter also has 3 coefficients (N = 3). By calling the MATLAB function ls_filter with length-M segments of both x[n] and d[n], the coefficients of the adaptive filter c[n] for n = 0 ... 999 can be computed. Visualize and compare the obtained coefficients with the true impulse response. Try different segment lengths M and different forgetting factors λ. Compare and discuss your results and explain the effects of M and λ (e.g., M ∈ {10, 50}, λ ∈ {1, 0.1}); see the simulation sketch below.
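A possible simulation sketch for this task (using the ls_filter sketch above; segment boundaries are handled in the simplest possible way):

% Identification of the time-varying system (simulation sketch)
M = 50; lambda = 0.1; N = 3;
x = randn(1000, 1);                        % stationary white noise, variance 1
d = zeros(1000, 1);
C = zeros(1000, N); H = zeros(1000, N);    % estimated and true coefficients over time
xvec = zeros(N, 1);
for n = 1:1000
    h = [1; 1 + 0.002*(n-1); 1 - 0.002*(n-1)];   % time index n-1 matches n = 0 ... 999
    xvec = [x(n); xvec(1:N-1)];
    d(n) = h.' * xvec;                     % noise-free system output
    H(n, :) = h.';
    if n >= M                              % estimate from the most recent segment
        C(n, :) = ls_filter(x(n-M+1:n), d(n-M+1:n), N, lambda).';
    end
end
plot(H, '--'); hold on; plot(C);           % dashed: true, solid: estimated
xlabel('n');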

1.4 The Wiener Filter

We consider x[n] and d[n] as stationary stochastic processes (for stationary conditions only!). The cost function is now stochastic:

J_MSE(c) = E{ |e[n]|^2 } ... Mean Squared Error (MSE)

Optimum:

c_MSE = \arg\min_c J_MSE(c).

Problem 1.3: Find c_MSE, i.e., the Wiener-Hopf equation. What statistical measurements must be known to get the solution?

Problem 1.4: In order to get the MSE-optimal coefficients, the first N samples of rxx[k], the auto-correlation sequence of x[n], and the cross-correlation between the tap-input vector x[n] and d[n] need to be known. This and the next problem are to practice the computation of correlations. Let the input signal be x[n] = A sin(ω_0 n + φ), where φ is a random variable, uniformly distributed over ]-π, π].
(i) Calculate the auto-correlation sequence rxx[k].
(ii) Write the auto-correlation matrix Rxx for a Wiener-filtering problem with N = 1, N = 2, and N = 3 coefficients.
(iii) Answer for these 3 cases whether the Wiener-Hopf equation can be solved or not.

(iv) Repeat the last tasks for the following input signal: x[n] = A e^{j(ω_0 n + φ)}.

Problem 1.5: The input signal x[n] is now white noise v[n] filtered by an FIR filter with impulse response g[n].

v[n] -> [ g[n] ] -> x[n]

    Find the auto-correlation sequence rxx[k].


    Principle of Orthogonality:

The estimate y[n] of the desired signal d[n] (stationary process) is optimal in the sense of a minimum mean squared error if, and only if, the error e[n] is orthogonal to the input x[n-m] for m = 0 ... N-1.

    Problem 1.6: Show that the Orthogonality Principle is true.

    Corollary to the Principle of Orthogonality:

When the filter operates in its optimum condition, the error e[n] and the estimate y[n] are also orthogonal to each other.

    Problem 1.7: Show that this corollary is true.

Problem 1.8: Show that the minimum mean squared error equals

J_MMSE = J_MSE(c_MSE) = E{ |d[n]|^2 } - p^H R_xx^{-1} p.

    1.5 System Identification

Figure 3: The system identification problem in a noise-free environment (unknown system h and adaptive filter c driven by x[n]; outputs d[n] and y[n]; error e[n]).

Let d[n] = h^H x[n], where h is the impulse response of the system to be identified.

Problem 1.9 (Undermodeling): Let the order of the unknown system be 1 (d[n] = h_0^* x[n] + h_1^* x[n-1]), but the Wiener filter be just a simple gain factor (y[n] = c_0^* x[n]). Determine the optimum value for this gain factor. The auto-correlation sequence rxx[k] of the input signal is known. Consider the cases when x[n] is white noise and also when x[n] is not white.

Problem 1.10 (Undermodeling, generalized): Repeat the previous problem when the order of the unknown system is M-1 and the order of the Wiener filter is N-1 with N < M. Use vector/matrix notation!

Problem 1.11 (Overmodeling): Repeat the previous problem when the order of the Wiener filter is greater than the order of the unknown system, i.e., N > M.

    1.6 System Identification in a Noisy Environment

Let d[n] = h^H x[n] + ν[n], where h is the impulse response of the system to be identified and ν[n] is stationary, additive noise.

Problem 1.12: Show that the optimal coefficient vector of the Wiener filter equals h if, and only if, ν[n] is orthogonal to x[n-m] for m = 0 ... N-1.

Problem 1.13: Under what condition is the minimum mean squared error equal to J_MMSE = J_MSE(c_MSE) = E{ |ν[n]|^2 } ?


Figure 4: The system identification problem in a noisy environment (as in Figure 3, with additive noise ν[n] on the output of the unknown system h).

1.7 Iterative Solution without Matrix Inversion: Gradient Search

Cost function:

J_MSE(c) = E{ |d[n] - c^H x[n]|^2 } = E{ |d[n]|^2 } - 2 p^H c + c^H R_xx c

Gradient:

\nabla_c J_MSE(c) = 2 (R_xx c - p)

The Gradient Search Method (Method of Steepest Descent)

c[n] = c[n-1] + µ (p - R_xx c[n-1])

searches along the direction of the negative gradient,

-\nabla_c J_MSE(c)|_{c=c[n-1]} = 2 (p - R_xx c[n-1]);

the factor 2 is absorbed into the step-size parameter µ.
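As an illustration, a minimal MATLAB sketch of this iteration on a toy problem (Rxx, p, and µ are invented for the example):

% Method of steepest descent on J_MSE(c) (illustrative sketch)
Rxx = [2 1; 1 2];              % example auto-correlation matrix (hypothetical)
p   = [1; 0.5];                % example cross-correlation vector (hypothetical)
mu  = 0.1;                     % step-size parameter
c   = zeros(2, 1);             % initial coefficient vector
for n = 1:200
    c = c + mu * (p - Rxx * c);    % step along the negative gradient direction
end
c_MSE = Rxx \ p;               % solution of the Wiener-Hopf equation, for comparison
disp([c c_MSE]);               % the iterate should approach c_MSE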

Problem 1.14: Show that this algorithm converges toward the MSE-optimal coefficient vector c_MSE. Under what condition on µ does it converge?

Problem 1.15: Calculate the convergence time constant(s) of the decay of the misalignment vector v[n] = c[n] - c_MSE (or coefficient deviation) when the gradient search method is applied.

Problem 1.16: (i) Express the MSE as a function of the misalignment vector v[n] = c[n] - c_MSE. (ii) Find an expression for the learning curve J_MSE[n] = J_MSE(c[n]) when the gradient search is applied. (iii) Determine the time constant(s) of the learning curve.

    2 Adaptive Transversal Filter Using The LMS Algorithm

    2.1 The LMS Adaptation Algorithm

c[n] = c[n-1] + µ e^*[n] x[n]

c[n] ... new coefficient vector
c[n-1] ... old coefficient vector
µ ... step-size parameter
e[n] ... error sample at time n
x[n] ... tap-input vector at time n
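A minimal MATLAB sketch of an LMS-adaptive transversal filter built around this update (the function name lms_filter and its interface are ours; real-valued signals are assumed, so the conjugation is dropped):

function [y, e, c] = lms_filter(x, d, N, mu)
% LMS-adaptive transversal filter (sketch): y[n] = c^H x[n], e[n] = d[n] - y[n],
% coefficient update c <- c + mu * e[n] * x[n] (real-valued signals assumed)
x = x(:); d = d(:);
n_samples = length(x);
y = zeros(n_samples, 1); e = zeros(n_samples, 1);
c = zeros(N, 1);
xvec = zeros(N, 1);                  % tap-input vector (delay line)
for n = 1:n_samples
    xvec = [x(n); xvec(1:N-1)];      % shift in the new input sample
    y(n) = c' * xvec;                % filter output
    e(n) = d(n) - y(n);              % error sample
    c = c + mu * e(n) * xvec;        % LMS coefficient update
end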

2.1.1 How to choose µ?

Sufficient (deterministic) stability condition (see lecture course!):

0 < µ < ...

Problem 3.3 (Design Example): For a noisy system identification problem, the following two measurements could be accomplished at the plant: σ_x^2 = 1 and σ_d^2 = 2. x[n] and ν[n] can be assumed to be stationary, zero-mean, white noise processes and orthogonal to each other.


    4 Interference Cancelation

    4.1 Principle

Figure 8: Adaptive noise canceler: the signal source emits ν[n]; the interference source emits x[n], which reaches the primary sensor through the interference path h[n]; primary sensor signal d[n] = ν[n] + (x ∗ h)[n]; reference sensor signal x[n]; adaptive filter output y[n] ≈ (x ∗ h)[n]; output e[n] ≈ ν[n].

The primary sensor (i.e., the sensor for the desired signal d[n]) receives the signal of interest ν[n] corrupted by an interference that went through the so-called interference path. When the isolated interference is denoted by x[n] and the impulse response of the interference path by h[n], the interference obtained by the primary sensor is (x ∗ h)[n]. In total, the sensor receives

d[n] = ν[n] + (x ∗ h)[n].

For simplification we assume E{ν[n] x[n-k]} = 0 for all k.

The reference sensor receives the isolated interference x[n].

The error signal is e[n] = d[n] - y[n] = ν[n] + (x ∗ h)[n] - y[n]. The adaptive filtering operation is perfect if y[n] = (x ∗ h)[n]. In this case the system output is e[n] = ν[n], i.e., the isolated signal of interest.

FIR model for the cross path: if we assume that

(x ∗ h)[n] = \sum_{k=0}^{N_h} h[k] x[n-k],

the interference cancelation problem is equal to the system identification problem.
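For illustration, a minimal simulation sketch of Fig. 8 (using the lms_filter sketch from Section 2; the signal, the path, and all parameters are invented for the example):

% Adaptive noise canceler (sketch): recover nu[n] from d[n] using the reference x[n]
n_samples = 5000;
nu = sin(2*pi*0.01*(0:n_samples-1)');     % signal of interest (hypothetical)
x  = randn(n_samples, 1);                 % isolated interference at the reference sensor
h  = [0.8; 0.4; -0.2];                    % interference path (hypothetical)
d  = nu + filter(h, 1, x);                % primary sensor signal
[y, e, c] = lms_filter(x, d, 3, 0.01);    % adaptive filter models the interference path
plot([nu e]);                             % after convergence, e[n] ~ nu[n] and c ~ h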

    4.2 MATLAB Exercises

MATLAB Exercise 4.1: Simulate an interference cancelation problem according to Fig. 8 (e.g., speaker next to a noise source, 50 Hz interference in electrocardiography, a baby's heartbeat corrupted by the mother's, ...). Additionally, simulate the possibly realistic scenario where a second cross path exists such that the reference sensor receives x[n] + \sum_{k=0}^{N_2} h_2[k] ν[n-k].

MATLAB Exercise 4.2 (Canceling a Periodic Interference):
To implement the filter structure shown in Fig. 9, you have to modify your MATLAB function of the LMS algorithm. Instead of the transversal filter you need a 90°-shifter. You


Figure 9: Canceling a periodic interference with an external reference source: an adaptive notch filter. The reference input from the 50 Hz AC power supply feeds coefficient c_0 directly and coefficient c_1 via a 90°-shifter; the primary input comes from the ECG preamplifier (ECG signal + 50 Hz interference); the LMS algorithm adapts c_0 and c_1; y[n] is subtracted to give the ECG output e[n].

may use the MATLAB expression x90 = imag(hilbert(x)). Compare the SNR of the primary input signal and the SNR of the output signal. Measure the convergence time constant and compare your result with the theory. What is the advantage of the 90°-shifter over a unit-delay element in a transversal filter (see also Problem 4.1)?

    4.3 Problems

Problem 4.1: We use a first-order transversal filter to eliminate an interference of the f_AC = 50 Hz AC power supply from an ECG signal. The sampling frequency is f_s (f_s > 100 Hz).
(i) Using the method of equating the coefficients, express the optimum coefficients c_0 and c_1 that fully suppress the interference in terms of |H(e^{jΩ_AC})| and arg H(e^{jΩ_AC}), i.e., in terms of the magnitude and the phase of the frequency response of the interference path at the frequency of the interference.
(ii) Calculate the auto-correlation sequence rxx[k] of the reference input signal as a function of the sampling frequency and build the auto-correlation matrix Rxx.
(iii) Determine the cross-correlation vector p and solve the Wiener-Hopf equation to obtain the MSE-optimal coefficients. Show that these coefficients are equal to those found in (i).
(iv) Determine the condition number χ = λ_max/λ_min of the auto-correlation matrix for the given problem as a function of the sampling frequency. Is the unit delay in the transversal filter a clever choice?
(v) For a given sampling frequency of f_s = 8000 Hz, modify the delay such that the condition number of the resulting auto-correlation matrix is minimized.


Figure 10: Linear prediction using an adaptive filter (AR process u[n]; a unit delay z^{-1} gives the filter input x[n] = u[n-1]; adaptive LMS filter output y[n]; desired signal d[n] = u[n]; prediction error e[n]).

    5 Adaptive Linear Prediction

    5.1 Autoregressive spectrum analysis

When we assume u[n] to be an AR process

u[n] = v[n] - \sum_{k=1}^{L} a_k u[n-k]

(see Appendix B), we can estimate the AR coefficients a_1, ..., a_L by finding the MSE-optimal coefficients of a linear predictor (see below). Once the AR coefficients have been obtained, the squared-magnitude frequency response of the recursive process-generator filter can be used as an estimate of the power-spectral density (PSD) of the process u[n] (sometimes called AR Modeling).

    5.2 Linear prediction

A linear predictor tries to predict the present sample u[n] from the L preceding samples u[n-1], ..., u[n-L] using a linear combination:

û[n] = \sum_{k=1}^{L} w_k u[n-k].

The prediction error is given by

e[n] = u[n] - û[n] = v[n] - \sum_{k=1}^{L} a_k u[n-k] - \sum_{k=1}^{L} w_k u[n-k].

Minimizing the mean-squared prediction error yields the proper predictor coefficients w_k for k = 1, ..., L. In the ideal case, the error is minimal when only the non-predictable white-noise excitation v[n] remains as e[n]. In this case, we obtain the (negative) AR coefficients: a_k = -w_k for k = 1, ..., L.

For adaptive linear prediction (see Fig. 10), the adaptive transversal filter is the linear combiner (w_k = c_{k-1} for k = 1, ..., L and û[n] = y[n]), and an adaptation algorithm (e.g., the LMS algorithm) is used to optimize the coefficients and to minimize the prediction error; a simulation sketch follows below.
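A minimal sketch of this setup (using the lms_filter sketch from Section 2; the AR(2) coefficients are invented for the example):

% Adaptive linear prediction (sketch): the predictor input is the delayed signal
n_samples = 5000;
v = randn(n_samples, 1);                  % white-noise excitation
u = filter(1, [1 -0.5 0.2], v);           % AR(2) process, a_1 = -0.5, a_2 = 0.2 (hypothetical)
x = [0; u(1:end-1)];                      % x[n] = u[n-1] (unit delay)
[y, e, c] = lms_filter(x, u, 2, 0.005);   % d[n] = u[n]; y[n] is the prediction
a_hat = -c                                % estimated AR coefficients, a_k = -w_k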

    5.3 Yule-Walker Equations

In order to get the Wiener-Hopf solution for the MSE-optimal coefficient vector c_MSE,

R_xx c_MSE = p,


2. LMS-adaptive transversal filter. Use your MATLAB implementation of the LMS algorithm according to Fig. 10. Take the coefficient vector from the last iteration to plot an estimate of the PSD function. Try different step-sizes µ.

3. RLS-adaptive transversal filter. Use rls.m (download it from our web page). Try different forgetting factors λ.

4. Welch's periodogram averaging method. Use the MATLAB function pwelch. Note, this is a non-parametric method, i.e., there is no model assumption.

Plot the different estimates into the same axes and compare them with the original PSD; a plotting sketch for the model-based estimates is given below.
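For items 2 and 3, the PSD estimate can be plotted from the final coefficient vector. A minimal sketch (assuming u holds the process samples, c the final predictor coefficient vector, and e the prediction-error signal; all names are ours):

a_hat = -c(:);                              % AR coefficient estimates, a_k = -w_k
sigma_v2 = mean(e(end-99:end).^2);          % excitation variance from the late error samples
[H, w] = freqz(sqrt(sigma_v2), [1; a_hat], 512);
plot(w/pi, 10*log10(abs(H).^2)); hold on;   % AR-model PSD estimate in dB
[Pxx, wp] = pwelch(u);                      % non-parametric reference (item 4);
plot(wp/pi, 10*log10(Pxx));                 % note: scaling conventions may differ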

MATLAB Exercise 5.2 (Periodic Interference Cancelation without an External Reference Source): Simulate the scenario shown in Fig. 11. Take a speech signal as the broadband signal. Try a delay around 10 ms and an order of at least 100.

    5.6 Problems

Problem 5.1 (Autocorrelation Sequence of an AR Process): Consider the following difference equation

u[n] = v[n] + 0.5 u[n-1],

i.e., a purely recursive linear system with input v[n] and output u[n]. v[n] are samples of a stationary white-noise process with zero mean and σ_v^2 = 1. Calculate the auto-correlation sequence ruu[k] of the output.

Problem 5.2 (AR Modeling): Consider a linear prediction scenario as shown in Fig. 10. The mean squared error should be used as the underlying cost function. The auto-correlation sequence of u[n] is given by

ruu[k] = 4/3 (1/2)^{|k|}.

Compute the AR coefficients a_1, a_2, ... and the variance of the white-noise excitation σ_v^2. Start the calculation for an adaptive filter with 1 coefficient. Then, repeat the calculation for 2 and 3 coefficients.

Problem 5.3 (Periodic Interference Cancelation without an External Reference Source): Consider a measured signal u[n] that is the sum of a white-noise signal v[n] with variance σ_v^2 = 1 and a sinusoid: u[n] = v[n] + cos(π/2 n + φ) (with a random phase offset φ).
(i) Calculate the auto-correlation sequence ruu[k] of u[n].
(ii) Let's attenuate the sinusoid using the setup of Fig. 11. A delay of 1 should be enough for the white v[n]. Compute the optimum coefficients c_0, c_1 of the first-order adaptive filter in the sense of a minimum mean squared error.
(iii) Determine the transfer function of the prediction-error filter, compute its poles and zeros, and plot the pole/zero diagram. Sketch its frequency response.

    6 Adaptive Equalization

    6.1 Principle

Fig. 12 shows the principle of adaptive equalization. The goal is to adapt the transversal filter to obtain

H_Channel(z) H_Equ(z) = z^{-Δ},

i.e., to find the inverse (except for a delay) of the transfer function of the unknown channel. In the case of a communication channel, this eliminates the intersymbol interference (ISI) introduced by the temporal dispersion of the channel.


Figure 12: Adaptive equalization (or inverse modeling): signal source u[n]; unknown channel with additive noise, output x[n]; adaptive equalizer output y[n]; a delay provides the reference d[n]; error e[n]; followed by the receiver, decision device, decoder, ...

Difficulties:

1. Assume H_Channel(z) has a finite impulse response (FIR); then the inverse system H_Channel^{-1}(z) is an IIR filter. We can only provide a finite-length adaptive transversal filter and therefore we only get an approximation of the inverse system.

2. Assume H_Channel(z) is a non-minimum-phase system (FIR or IIR); then the inverse system H_Channel^{-1}(z) is not stable.

3. We typically have to introduce the extra delay Δ (i.e., the group delay of the cascade of both the channel and the equalizer).

Situations where a reference, i.e., the original signal, is available to the adaptive filter:

Audio: adaptive concert hall equalization, car HiFi, airplanes, ... (equalizer = pre-emphasis or pre-distortion; microphone where optimum quality should be received)

Modem: transmission of an initial training sequence to adapt the filter and/or periodic interruption of the transmission to re-transmit a known sequence to re-adapt.

Often there is no possibility to access the original signal. In this case we have to guess the reference: Blind Adaptation. Examples are Decision-Directed Learning and the Constant Modulus Algorithm, which exploit a-priori knowledge of the source.

    6.2 Decision-Directed Learning

For a digital transmission where we know the coding method (see Fig. 13), we can apply a distance measure from the received symbol to all possible symbols of the coding alphabet and take the closest one (Decision Device). In the case of a simple quantization, this is achieved by re-applying the quantizer to the sequence of received and equalized symbols; a small sketch follows below.
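For a binary alphabet {-1, +1}, the decision device and the decision-directed error can be sketched as follows (our illustration; y denotes the equalizer output sequence):

% Decision-directed error for binary symbols (sketch)
u_hat = sign(y);            % decision device: closest symbol from {-1, +1}
u_hat(u_hat == 0) = 1;      % resolve ties arbitrarily
e = u_hat - y;              % error between hard decision and soft output
% the adaptation then uses this e[n] in place of d[n] - y[n]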

    6.3 MATLAB Exercises

MATLAB Exercise 6.1 (Inverse Modeling): Set up a simulation according to Fig. 12. Visualize the adaptation process by plotting the magnitude of the frequency response of the channel, the equalizer, and the overall system H_Channel(z)H_Equ(z).


Figure 13: Decision-directed adaptive channel equalizer: input x[n]; adaptive equalizer output y[n] (soft decision); decision device output d[n] (hard decision); error e[n].

MATLAB Exercise 6.2 (Decision-Directed Channel Equalization): Simulate the equalization of a baseband transmission of a binary signal (possible symbols: -1 and +1). Plot bit-error graphs for the equalized and the unequalized transmission (i.e., a stem plot that indicates for each symbol whether it has been decoded correctly or not). Extend your program to add an initialization phase for which a training sequence is available. After the training, the equalizer switches to decision-directed mode.

    6.4 Problems

Problem 6.1 (ISI and Open-Eye Condition): For the following equivalent discrete-time channel impulse responses

(i) h = [0.8, 1, 0.8]^T   (ii) h = [0.4, 1, 0.4]^T   (iii) h = [0.5, 1, 0.5]^T

calculate the worst-case ISI for binary data u[n] ∈ {+1, -1}. Is the channel's eye opened or closed? (One common formalization is recalled below.)
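As a reminder, with the main tap at index k_0 (our formalization, not taken verbatim from the course):

\mathrm{ISI}_{\max} = \sum_{k \neq k_0} |h_k| , \qquad \text{the eye is open} \iff |h_{k_0}| > \mathrm{ISI}_{\max} .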

Problem 6.2 (Least-Squares and MinMSE Equalizer): For a noise-free channel with given impulse response h = [1, 2/3, 1/3]^T, compute the optimum coefficients of the equal-length equalizer in the least-squares sense. Can the equalizer open the channel's eye? Is the least-squares solution equivalent to the minimum-MSE solution for white data u[n]?

Problem 6.3 (MinMSE Equalizer for a Noisy Channel): Consider a channel with impulse response h = [1, 2/3, 1/3]^T and additive white noise ν[n] with zero mean and variance σ_ν^2. Compute the optimum coefficients of the equal-length equalizer in the sense of a minimum mean squared error.

6.5 More Problems: Alternative Equalizer Structures

Problem 6.4 (Decision-Feedback Equalizer, DFE): Consider the feedback-only equalizer in Fig. 14. (i) For a general channel impulse response h and a given delay Δ, calculate the optimum (min. MSE) coefficients c_b of the feedback equalizer (use matrix/vector notation; the transmitted data u[n] is white and has zero mean). (ii) What is the resulting impulse response of the overall system (before the decision device) when the equalizer operates at its optimum? (iii) Do the MSE-optimum coefficients c_b of the feedback equalizer change for a noisy channel? (iv) Extend the equalizer structure by an additional forward (or transversal) equalizer filter with coefficients c_f right after the channel and derive the design equation for both MSE-optimum c_f and c_b (use matrix/vector notation again; use an overall coefficient vector c^T = [c_f^T c_b^T]).


Figure 14: Decision-feedback equalizer: data u[n] pass through the channel h; the feedback equalizer c_b (with unit delay z^{-1}) feeds back the decisions; the decision device offers training and decision-directed modes; error e[n]; output out[n], which should equal u[n-Δ].

Problem 6.5 (Fractionally-Spaced Equalizer): A fractionally-spaced equalizer runs at a sampling rate that is higher than the symbol rate. Consider the T/2-fractionally-spaced equalizer (i.e., it runs at the double rate) in Fig. 15, where T denotes the symbol interval. The decision device is synchronized with the sent symbols, which correspond to the even-indexed samples at the double rate.

Figure 15: Fractionally-spaced equalizer: channel and equalizer operate at sample times nT/2; the decision device operates at symbol instants mT.

The discrete-time description of the channel for the high sampling rate is

H(z) = h_0 + h_1 z^{-1} + h_2 z^{-2} + h_3 z^{-3} = 1/2 + z^{-1} + 1/2 z^{-2} + 1/4 z^{-3},

i.e., the unit delay z^{-1} corresponds to T/2.
(i) Calculate the coefficients of the equal-length equalizer

C(z) = c_0 + c_1 z^{-1} + c_2 z^{-2} + c_3 z^{-3}

such that the cascade of the given channel and the equalizer satisfies H(z)C(z) = 1, i.e., it enables a delay-free and ISI-free transmission.
(ii) Calculate the coefficients of the equalizer such that the cascade is a pure delay of 1 symbol, i.e., H(z)C(z) = z^{-2}.
(iii) Consider the channel to be noisy (additive white noise). Compute the noise gains of the two equalizers of the previous tasks. Which one should be chosen?
(iv) Let the channel be

H(z) = 1 + 1/2 z^{-1} + 1/4 z^{-2} + 1/8 z^{-3}.

Compute again the coefficients of an equal-length equalizer. (A numerical cross-check is sketched below.)
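Since only the even-indexed (symbol-rate) samples of the cascade reach the decision device, tasks (i) and (ii) constrain the cascade only at those samples. Under that reading, hand calculations can be cross-checked with a small MATLAB sketch (our code, not part of the assignment):

% Numerical cross-check for the T/2-spaced equalizer designs (sketch)
h = [1/2; 1; 1/2; 1/4];                  % channel of tasks (i)-(iii); unit delay = T/2
N = 4;                                    % equalizer length
A = toeplitz([h; zeros(N-1,1)], [h(1), zeros(1, N-1)]);  % conv(h, c) = A*c
Ae = A(1:2:end, :);                       % rows for the even-indexed (symbol-rate) samples
c1 = Ae \ [1; 0; 0; 0];                   % task (i): symbol-rate cascade = 1
c2 = Ae \ [0; 1; 0; 0];                   % task (ii): pure delay of one symbol (z^-2)
disp(conv(h, c1).');                      % even-indexed entries should read 1, 0, 0, 0
noise_gains = [sum(c1.^2), sum(c2.^2)]   % compare the two designs for task (iii)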

