Introduction


Transcript of Introduction

  • Contents

    1 Introduction to Adaptive Filters

    1.1 Motivation for Adaptive Filtering

    1.2 Overview of Adaptive Filtering Principles

    1.2.1 Advantages and Features of Adaptive Filters

    1.2.2 Adaptive Filter Structures

    1.2.3 Adaptation Approaches

    1.3 Applications of Adaptive Filters

    1.3.1 Modelling or System Identification

    1.3.2 Inverse Modelling

    1.3.3 Linear Prediction

    1.3.4 Interference Cancellation

    1.4 Concluding Remarks

    1.5 Learning Objectives of this Module

    1.6 Conventions and Notations

  • Chapter 1

    Introduction to Adaptive Filters

    In this chapter, a brief introductory overview of the topic of adaptive filtering is given. In the context of signal processing, a filter is a device designed to process the signals at its input in a specified manner and generate output signals that meet certain well-defined objectives. For example, a filter may be designed

    - to extract information about specific quantities of interest from noisy data (e.g. enhancing a desired signal buried in noise, recovering the transmitted data bits from a received noisy signal, etc.),

    - to aid in tracking dynamic physical processes (e.g. missile guidance, jamming the radar of an enemy vehicle, etc.),

    - to aid in medical diagnosis by analyzing the measured bio-medical signals,

    - to find efficient representations of signals (e.g. compression of speech and image data), etc.


    In general, adaptive filtering refers to the process by which

    a filter tries to adjust its parameters so as to respond, with some

    specified objectives, to the changes that are taking place in its

    surroundings. The changes and/or state of the surroundings may

    or may not be directly measurable. That is why an adaptive

    filter is employed to learn these changes. Therefore, the study of

    adaptive filters involves several aspects as listed below.

    - Filter Structure: The type of filters used (e.g. non-recursive filters, recursive filters, etc.)

    - Task of the Filter: The function to be fulfilled by the filter (e.g. channel equalization, channel identification, interference cancellation, etc.)

    - Adaptation Algorithm: The steps specified for doing the adaptation

    - Performance Measures: The measures that can be used to assess the quality of the adaptation process (e.g. speed of convergence, accuracy, etc.)

    - Analysis: Theoretical analysis of the adaptation process and performance, so as to aid in the selection as well as the design of the adaptive algorithm.

    In this module on adaptive signal processing, we will touch upon

    all the above aspects and the application of adaptive filters to

    real-world problems.

    Figure 1.1: A baseband transmission and reception system with fixed equalizer.

    1.1 Motivation for Adaptive Filtering

    Consider the baseband data transmission and reception system

    shown in Figure 1.1. Here, the input s(n) denotes the trans-

    mitted data symbols, H(z) denotes the channel transfer func-

    tion and v(n) denotes the channel noise. A simple reception

    scheme, consisting of an equalizer W (z) and a threshold detec-

    tor, is considered. As the transmitted symbols pass through the

    channel, they are subjected to several distortions. The received

    signal x(n) is a noisy version of the distorted symbols. The role

    of the equalizer is to compensate for the distortions introduced

    by the channel and noise on the data symbols, in such a way that

    the threshold detector should be able to satisfactorily recover the

    data symbols.

    To design an equalizer to achieve the above objective, we

    could resort to a one-time effort by setting up the mean squared

    error (MSE) at the equalizer output as a cost function and solving

    for the optimum equalizer that minimizes the MSE. However, as

    we will see later, the equation for the optimum equalizer involves


    certain statistics (e.g. auto-correlation and cross-correlation) of

    the underlying signals. To compute these statistics, we need to

    know the channel characteristics. If these quantities are not avail-

    able, then we will need to use time-averaging on the measured

    signals (i.e. channel input, equalizer input, desired signal) to

    estimate the required statistics. This can be very costly in terms

    of computations, memory and time. Furthermore, if the channel

    characteristics are time-varying, then we will need to repeat the

    above computations very often so as to respond to the changes

    in the channel characteristics. On the other hand, the adaptive

    equalization approach offers a very attractive alternative way for

    getting the (near) optimum filter settings.

    Figure 1.2: A baseband transmission and reception system with adaptive equalizer.

    The adaptive equalization set-up is shown in Figure 1.2. A

    training sequence, which is also known to the receiver, is usu-

    ally transmitted at the start of transmission. Ideally, the equal-

    izer output is expected to be equal to the transmitted symbols.


    Because the training sequence is known to the receiver, we can

    generate an error signal that indicates the amount of discrepancy

    between the actual output of the equalizer and the desired out-

    put. This error signal can in turn be used to adjust (or, adapt)

    the equalizer parameters to reduce the error in some specified

    sense (e.g. MSE). At the end of the training mode, the equal-

    izer parameters would have converged close to their optimal val-

    ues. At this time, the detected symbols (or, decisions) at the

    threshold detector output can be considered very reliable. From

    then onwards, the decisions can be used to continue adapting

    the equalizer. This is called the decision-directed mode. This is

    helpful to fine-tune the equalizer and to track slow changes in

    the channel characteristics.

    In the above process of adapting the equalizer parameters,

    what is equivalently happening is that we are indirectly acquiring

    some knowledge about the channel characteristics. This is the

    main idea behind adaptive filtering, i.e. to learn about a system

    by observing the changes taking place in the response of the

    system to known (partially or fully) inputs and then to reflect

    this knowledge in adjusting the parameters of a filter to achieve

    certain desired objectives.
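    To make the training and decision-directed modes concrete, the sketch below adapts the coefficients of a simple equalizer with an LMS-style update (the LMS algorithm itself is introduced later in the module). The channel, noise level, equalizer length, decision delay, step size and switch-over point are all illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative channel H(z) and noise level (assumed values)
h_channel = np.array([1.0, 0.4, -0.2])
noise_std = 0.05
N = 11          # equalizer length (assumed)
mu = 0.01       # step size (assumed)
delay = 5       # decision delay (assumed)

s = rng.choice([-1.0, 1.0], size=5000)            # transmitted symbols s(n) (BPSK)
x = np.convolve(s, h_channel)[:len(s)]            # channel output
x += noise_std * rng.standard_normal(len(x))      # received signal x(n)

w = np.zeros(N)                                   # equalizer coefficients

for n in range(N, len(x)):
    x_vec = x[n - N + 1:n + 1][::-1]              # regressor [x(n), ..., x(n-N+1)]
    y = w @ x_vec                                  # equalizer output y(n)
    if n < 1000:                                   # training mode: d(n) = s(n - delay)
        d = s[n - delay]
    else:                                          # decision-directed mode: d(n) = detected symbol
        d = np.sign(y) if y != 0 else 1.0
    e = d - y                                      # error signal e(n)
    w += mu * e * x_vec                            # LMS-style coefficient update
```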

    Figure 1.3: Schematic diagram showing the main components of an adaptive filtering set-up.

    1.2 Overview of Adaptive Filtering Principles

    The main principles of adaptive filtering can be described by con-

    sidering the schematic shown in Figure 1.3. In a nut-shell, the

    purpose of the adaptive filter is to produce an output signal,

    based on a given input signal, that very closely matches a spec-

    ified desired signal. The match between the desired signal and

    filter output is quantified by a performance function (also called a

    cost function) defined using the error signal. An adaptation

    algorithm, which adjusts the filter coefficients, is developed to

    optimize the given performance function in an iterative manner,

    e.g. minimize the mean square value of the error. The mean

    square error (MSE) is one of the most widely used performance

    functions.


    1.2.1 Advantages and Features of Adaptive Filters

    The example of adaptive equalization discussed in Section 1.1

    answers the following questions:

    i) Why choose the adaptive approach?

    ii) Why not solve for the optimum filter coefficients by a one-time optimization of the selected cost function?

    In an attempt to give a more complete picture of adaptive filters,

    we list below some of their advantages and features.

    Advantages of Adaptive Filters:

    1. Memory and Computational Requirements: Estimation of

    the required statistics for direct computation of the optimum

    filter parameters requires accumulation of large amounts of

    signal samples and doing averaging on these large data se-

    quences. Further, this approach also results in large delay in

    the filter output.

    On the other hand, adaptive approaches do not need to

    accumulate signal samples, thereby resulting in large savings

    in memory and computational costs. Furthermore, they do

    not introduce any significant delay in the filter output.

    2. Self-Optimization: Adaptive filters are self-optimizing sys-

    tems. That is, they can adapt themselves to an existing

    environment to achieve near-optimum performance.

    3. Time-Varying Environment: Adaptive filters can adapt their


    parameters to provide near-optimum performance even when

    the environment parameters are (slowly) time-varying.

    4. Graceful Degradation: Adaptive filters can repair themselves

    to certain extent. That is, if some parts of an adaptive

    filter are damaged, the remaining parts can (to some extent)

    automatically compensate for the damaged parts. In other

    words, they exhibit graceful degradation in performance.

    On the other hand, non-adaptive systems almost collapse if

    some parts are damaged.

    5. Ease of Design: Adaptive filters help to relieve the designer

    from having to collect accurate knowledge of the environ-

    ment (e.g. channel characteristics).

    6. Implementation: Adaptive algorithms are usually much sim-

    pler to code in software and DSP (digital signal processing)

    processors, or to implement in hardware, compared to non-

    adaptive approaches.

    General Features of Adaptive Filters:

    1. Adaptive filters require an initial training effort. However,

    there are also blind adaptive algorithms that relax this re-

    quirement to varying degrees [1].

    [1] In digital communication systems, the study of blind adaptation approaches is currently receiving increased attention, since blind schemes make more efficient use of the available bandwidth for transmitting user data (rather than training + user data).


    2. Strictly speaking, adaptive filters are non-linear because the

    filter parameters are data-dependent. As a result, an adap-

    tive filter does not obey the principle of superposition even

    if the filter structure is that of a linear filter.

    3. Outputs of adaptive filters are non-stationary in nature since

    the filter parameters are changing with time.

    Selection of Adaptive Algorithms: In the literature, there exists a wide variety of adaptive algorithms. For any particular

    application, we need to wisely choose the adaptive algorithm to

    use. In general, examining the following factors helps to decide on the choice of one algorithm over another.

    1. Convergence speed: It is the number of iterations required

    for the algorithm to converge close enough to the optimum

    filter parameters.

    2. Mis-adjustment: It is a quantitative measure of the amount

    by which the final value of the MSE deviates from the mini-

    mum MSE.

    3. Tracking: It is the ability of the algorithm to track varia-

    tions in the environment. Tracking performance involves the

    trade-off between convergence speed and accuracy.

    4. Robustness: It refers to the ability of the algorithm to op-

    erate satisfactorily in the presence of small disturbances (ex-

    ternal or internal).


    5. Complexity: It refers to the number of computations and

    memory elements required for completing each iteration of

    the adaptive algorithm.

    6. Structure: This refers to the structure of information flow in

    the algorithm. This has great impact on efficient hardware

    implementation and achievable speed of adaptive algorithms.

    7. Numerical Properties: This refers to the behaviour of the

    algorithm under finite-precision implementation (i.e. quan-

    tization errors) and ill-conditioned input data. We prefer to

    have a numerically robust adaptive algorithm.

    1.2.2 Adaptive Filter Structures

    In signal processing applications, the most commonly used struc-

    tures for implementing digital (discrete-time) adaptive filtering

    are transversal filter, linear combiner, lattice predictor, and re-

    cursive filter.

    Transversal Filter: A transversal filter with N coefficients (taps or weights) is shown in Figure 1.4. The input x(n) passes

    through a series of identical delay elements whose delay is matched

    to the arrival rate of the input samples. The filter output y(n) is

    a linear combination of the outputs of the delay elements, and is

    Figure 1.4: A transversal filter (or, tapped-delay line filter) with N taps.

    given by [2]

        y(n) = \sum_{i=0}^{N-1} w_i x(n-i).

    For this reason, it is also known as a tapped-delay line filter.

    [2] In adaptive filters, the filter coefficients should be denoted as w_0(n), w_1(n), ..., w_{N-1}(n) to express the fact that the coefficients change with time due to adaptation. For the sake of convenience, we do not show this time dependence explicitly in the filter structures of Figures 1.4 to 1.7.

    The transfer function of this filter is given by

        H(z) = \frac{Y(z)}{X(z)} = \sum_{i=0}^{N-1} w_i z^{-i}

    where Y(z) and X(z) are the z-transforms of y(n) and x(n), respectively.
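    As a concrete illustration of the tapped-delay line equation, the following sketch computes y(n) directly from the sum above and checks it against numpy's built-in convolution; the tap weights and input are illustrative assumptions.

```python
import numpy as np

w = np.array([0.5, 0.3, -0.2, 0.1])          # assumed tap weights w_0 ... w_{N-1}
N = len(w)

rng = np.random.default_rng(0)
x = rng.standard_normal(50)                   # input x(n)

# y(n) = sum_{i=0}^{N-1} w_i x(n-i), with x(n) = 0 for n < 0
y = np.zeros(len(x))
for n in range(len(x)):
    for i in range(N):
        if n - i >= 0:
            y[n] += w[i] * x[n - i]

# Same operation via convolution (truncated to the input length)
assert np.allclose(y, np.convolve(x, w)[:len(x)])
```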

    Linear Combiner: In some applications (e.g. beamforming), the tap inputs may not be delayed samples of a single input. In such cases, the linear combiner structure shown in Figure 1.5 is

    Figure 1.5: A linear combiner with N taps.

    used. Its output y(n) is a linear combination of the different

    input signals, and is given by

        y(n) = \sum_{i=0}^{N-1} w_i x_i(n).

    Note that the linear combiner becomes a transversal filter if we choose x_i(n) = x(n-i), i = 0, 1, ..., N-1. The computational complexities of the linear combiner and the transversal filter are equal, namely N multiplications and N-1 additions.

    Figure 1.6: A lattice predictor of order N.

    Lattice Predictor: The lattice predictor is known for its modular structure. A lattice predictor of order N consists of N stages as shown in Figure 1.6. Each of the stages has two


    inputs and two outputs. The m-th stage is described by a pair of input-output relations given by

        f_m(n) = f_{m-1}(n) - \kappa_m b_{m-1}(n-1)
        b_m(n) = b_{m-1}(n-1) - \kappa_m f_{m-1}(n)

    for m = 1, 2, ..., N, with f_0(n) = b_0(n) = x(n), where \kappa_m denotes the m-th reflection coefficient. Its computational complexity is 2N multiplications and 2N additions.
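    The sketch below runs a signal through an N-stage lattice using exactly these recursions; the reflection coefficients here are arbitrary assumed values, not ones derived from any particular signal.

```python
import numpy as np

def lattice_predictor(x, kappa):
    """Run x(n) through an order-N lattice predictor.

    Returns the forward and backward prediction errors f_N(n), b_N(n)
    of the final stage. `kappa` holds the reflection coefficients
    kappa_1, ..., kappa_N (assumed known here).
    """
    N = len(kappa)
    b_prev = np.zeros(N + 1)        # b_m(n-1) for m = 0..N (previous time step)
    f_out = np.zeros(len(x))
    b_out = np.zeros(len(x))
    for n, xn in enumerate(x):
        f = np.zeros(N + 1)
        b = np.zeros(N + 1)
        f[0] = b[0] = xn            # stage 0: f_0(n) = b_0(n) = x(n)
        for m in range(1, N + 1):   # stage-by-stage recursions
            f[m] = f[m - 1] - kappa[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] - kappa[m - 1] * f[m - 1]
        f_out[n], b_out[n] = f[N], b[N]
        b_prev = b                  # store b_m(n) for the next time step
    return f_out, b_out

# Example with assumed reflection coefficients
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
f_N, b_N = lattice_predictor(x, kappa=[0.5, -0.3, 0.1])
```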

    Figure 1.7: The structure of an IIR filter with Nb zeros and Na poles.

    Recursive Filter: The structures in Figures 1.4, 1.5 and 1.6 correspond to non-recursive filters, i.e. computation of the filter

    output does not involve any feedback or recursive mechanism.

    Therefore, they are also called finite impulse response (FIR) fil-

    ters since their impulse responses are of finite duration in time.


    Further, the number of coefficients in these filters specifies the effective length of their impulse response. Consequently, it is impractical to use these filters to implement impulse responses that are very long, due to the resulting complexity.

    On the other hand, impulse responses of infinite duration

    can be implemented using filters with a finite number of parame-

    ters by incorporating feedback in the computation of the filter

    output. The structure of the resulting filter, called infinite im-

    pulse response (IIR) filter, is shown in Figure 1.7. Its output is

    given by the recursive equation

        y(n) = \sum_{i=0}^{N_b} b_i x(n-i) + \sum_{i=1}^{N_a} a_i y(n-i)

    where a_1, a_2, ..., a_{N_a} and b_0, b_1, ..., b_{N_b} are the coefficients of the feedback and feedforward paths, respectively. The transfer function of this filter is given by

        H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{i=0}^{N_b} b_i z^{-i}}{1 - \sum_{i=1}^{N_a} a_i z^{-i}}.

    Thus, H(z) has Nb zeros (i.e. roots of the numerator) and

    Na poles (i.e. roots of the denominator). One of the main

    advantages of IIR filters is the computational simplicity in im-

    plementing infinite (very long) impulse responses. Its computa-

    tional complexity is Na+Nb+1 multiplications and Na+Nb ad-


    ditions. The IIR structure becomes an FIR structure if Na = 0.

    Unlike FIR filters, stability is an important concern in IIR

    filters. This makes adaptation of IIR filters more difficult com-

    pared to FIR filters. To guarantee stability, all the poles of the

    filter must lie within the unit circle |z| = 1. Another difficulty associated with adaptive IIR filters is that the MSE performance

    function associated with these filters may have local optima due

    to their recursive nature.
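    To make the recursive equation above concrete, the sketch below computes the output of a small, stable IIR filter directly from the difference equation and cross-checks it with scipy.signal.lfilter (whose convention negates the feedback coefficients relative to the equation above); the coefficient values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative coefficients (assumed values), with Nb = 2 zeros and Na = 2 poles
b = np.array([0.5, 0.3, 0.2])     # feedforward: b_0, b_1, b_2
a = np.array([0.4, -0.1])         # feedback:    a_1, a_2  (as in the text's convention)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)

# Direct implementation of y(n) = sum_i b_i x(n-i) + sum_i a_i y(n-i)
y = np.zeros(len(x))
for n in range(len(x)):
    for i in range(len(b)):
        if n - i >= 0:
            y[n] += b[i] * x[n - i]
    for i in range(1, len(a) + 1):
        if n - i >= 0:
            y[n] += a[i - 1] * y[n - i]

# scipy uses y(n) = sum b_i x(n-i) - sum a_i^scipy y(n-i), so negate the feedback taps
y_scipy = lfilter(b, np.concatenate(([1.0], -a)), x)
assert np.allclose(y, y_scipy)
```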

    Systolic Array: The systolic array structure was developed for parallel computing applications, and is suitable for implement-

    ing linear algebraic computations such as matrix multiplication,

    triangularization, matrix transformation etc. The use of systolic

    arrays helps to achieve high throughput rates, which are required

    in several real-time applications of advanced signal processing.

    Non-linear Filters: The filter structures considered above are linear in nature. In situations where the environment or system to

    be modeled behaves non-linearly, we need to use non-linear filter

    structures. Volterra filters and neural networks are examples of

    non-linear filter structures.

    Real and Complex Filters: In some applications, the filter input and desired signal are complex-valued rather than real-

    valued. For example, the baseband signal corresponding to a


    QAM (quadrature amplitude modulation) modulated signal is

    complex-valued. In processing such signals, we need to use filters

    with complex-valued coefficients.

    1.2.3 Adaptation Approaches

    The various adaptation approaches can be broadly classified as belonging to either a stochastic or a deterministic framework, depending on whether the underlying cost function is a statistical average or a time average. The most popularly used cost function in the

    stochastic framework is mean square error (MSE) and the re-

    sulting filters are called Wiener filters. The most popularly used

    cost function in the deterministic framework is the weighted sum

    of squared errors and the resulting approaches are called least

    squares approaches (LS).

    Wiener Filter Approach: The optimum coefficients of the Wiener filter are obtained by minimizing the MSE cost function.

    Clearly, construction and minimization of this cost function re-

    quires certain statistics which need to be obtained by ensemble

    averaging. The adaptive approach to Wiener filters is to replace

    these statistics by easily obtainable estimates. For example, the

    well-known LMS (least mean square) adaptive algorithm is obtained when the MSE E[e^2(n)] (E[·] denotes the statistical expectation operator) is replaced by the instantaneous squared error e^2(n).
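    As a preview of what this substitution leads to, the following sketch implements the resulting LMS update w ← w + μ e(n) x(n) for a transversal filter in a toy system-identification setting; the plant, signals and step size are illustrative assumptions, and the algorithm itself is derived properly later in the module.

```python
import numpy as np

def lms(x, d, N, mu):
    """LMS adaptation of an N-tap transversal filter.

    x  : input signal x(n)
    d  : desired signal d(n)
    mu : step size (must be small enough for convergence)
    Returns the final weights and the error signal e(n).
    """
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1:n + 1][::-1]   # [x(n), x(n-1), ..., x(n-N+1)]
        y = w @ x_vec                       # filter output y(n)
        e[n] = d[n] - y                     # error e(n) = d(n) - y(n)
        w += mu * e[n] * x_vec              # update from instantaneous gradient of e^2(n)
    return w, e

# Toy system-identification example with an assumed unknown plant
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([0.8, -0.4, 0.2, 0.1])            # assumed plant
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, e = lms(x, d, N=4, mu=0.01)                   # w_hat approaches w_true after convergence
```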


    Transform domain adaptive filtering approaches help to achieve faster convergence speed. The adaptation is done in

    transform domain, which is so selected as to result in faster

    convergence of the algorithm in this domain compared to

    regular time domain implementation.

    Block adaptation approaches help to achieve a reduction in computational requirements when the number of filter coef-

    ficients is very large. The adaptation is done only once every

    block. By making use of FFT (fast Fourier transform), this

    can be done efficiently.

    Least Squares Approach: In this case, the optimum filter coefficients are obtained by minimizing the sum of weighted squared

    errors for a given data record. This deterministic approach results in al-

    gorithms that may, in general, converge faster than MSE-based

    algorithms. However, these algorithms are generally more com-

    putationally demanding and numerically sensitive compared to

    the MSE-based algorithms. There are three classes of recursive

    least squares (RLS) adaptive algorithms.

    Standard RLS Algorithm: Its complexity is proportional to N^2, where N is the number of filter coefficients.

    QR-Decomposition based RLS Algorithm: This version is more robust to numerical errors. Its complexity is also proportional to N^2. It can be implemented using systolic

    arrays which are suitable for parallel computing.


    Fast RLS Algorithms: The complexity of this family of algorithms is proportional to N.

    1.3 Applications of Adaptive Filters

    Adaptive filtering has been successfully applied in several diverse

    fields such as communications, control, speech processing, bio-

    medical engineering, data storage etc. In all these applications,

    the role of the adaptive filter is to filter a given input signal to

    match a specified desired signal. An essential difference between

    the various applications is the manner in which the desired signal

    is obtained. The functions of the four basic classes of adap-

    tive filtering applications are modelling or system identification,

    inverse modelling, linear prediction and interference cancellation.

    1.3.1 Modelling or System Identification

    In modelling applications, the task of the adaptive filter is to

    provide a model that represents the best fit to an unknown plant.

    This is illustrated in Figure 1.8. Usually, the plant output is a

    noisy signal. The task of the adaptive filter is to learn the transfer

    function of the plant.

    Typical applications of modelling are self-tuning regulators in

    control systems and adaptive channel identification in commu-

    nication or storage systems. Figure 1.9 illustrates the self-tuning

    regulator. The adaptive filter assumes the role of Plant Model

    in this figure. The identified model parameters are used to mod-

    Figure 1.8: Adaptive system modelling.

    Figure 1.9: Applications of adaptive system modelling: Self-tuning regulator.

    ify the control signal for the plant. Figure 1.10 illustrates the

    adaptive channel identification system. The extracted channel

    parameters can be used to design the equalizer and detector. As

    mentioned earlier, after the initial training mode, the adaptation

    can continue by replacing the training sequence with the symbol

    decisions from the output of the detector.

    Figure 1.10: Applications of adaptive system modelling: Adaptive channel identification.

    1.3.2 Inverse Modelling

    In inverse modelling applications, the task of the adaptive filter

    is to provide an inverse model of an unknown plant. This is

    illustrated in Figure 1.11.

    Figure 1.11: Adaptive inverse system modelling.

    A typical application of inverse modelling is adaptive chan-

    nel equalization in communication or storage systems. This was discussed in Section 1.1 and is illustrated in Figure 1.2. The adaptive filter models the inverse of the channel transfer function

    while minimizing the channel noise. Again, after the initial train-


    ing mode, the adaptation can continue by replacing the training

    sequence with the symbol decisions from the detector output.

    Figure 1.12: Adaptive linear prediction.

    1.3.3 Linear Prediction

    In linear prediction applications, the adaptive filter is meant to

    provide the best prediction of the current sample of a random

    signal using its past samples. This is illustrated in Figure 1.12.

    The adaptive filter output is the prediction estimate of the sig-

    nal u(n). Consequently, the error signal e(n) is the unpre-

    dictable part of u(n). This, coupled with the concept of au-

    toregressive modelling, can be used for finding parametric repre-

    sentations of correlated random processes.

    Typical applications of linear prediction are in speech process-

    ing, signal detection and spectral estimation. For the sake of

    illustration, we consider the adaptive line enhancement (ALE)

    problem which is an example of adaptive signal detection. The

    objective of ALE is to extract a narrowband signal from a wide-

    band interference or vice-versa. An example of ALE is to clean up


    the measured bio-medical signal from the 50/60 Hz power-line

    interference. The required system set-up is exactly that of Figure 1.12, with u(n) being the bio-medical signal mixed with

    power-line interference. Because narrowband signals are highly

    correlated processes compared to wideband signals, the output of the adaptive filter (or, adaptive predictor) will be a good estimate

    of the narrowband component in u(n). Consequently, the error

    signal e(n) will represent the cleaned-up bio-medical signal.
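    The following sketch illustrates the ALE idea: a delayed version of the corrupted measurement drives an adaptive predictor (adapted here with an LMS-style update, our own choice), the predictor output tracks the highly correlated 50 Hz interference, and the error signal retains the wideband biomedical component. The sampling rate, filter length, prediction delay and step size are illustrative assumptions.

```python
import numpy as np

fs = 500.0                                   # assumed sampling rate (Hz)
t = np.arange(10000) / fs
rng = np.random.default_rng(2)

biomed = rng.standard_normal(len(t))         # stand-in wideband "biomedical" signal
interference = 0.8 * np.sin(2 * np.pi * 50 * t)
u = biomed + interference                    # measured signal u(n)

N, delay, mu = 32, 1, 0.005                  # predictor length, prediction delay, step size
w = np.zeros(N)
e = np.zeros(len(u))

for n in range(N + delay, len(u)):
    # Regressor: delayed samples u(n-delay), ..., u(n-delay-N+1)
    x_vec = u[n - delay - N + 1:n - delay + 1][::-1]
    y = w @ x_vec                            # predictor output: estimate of the narrowband part
    e[n] = u[n] - y                          # error: cleaned-up wideband signal
    w += mu * e[n] * x_vec                   # LMS-style update
```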

    Figure 1.13: Adaptive interference cancellation.

    1.3.4 Interference Cancellation

    In interference cancellation applications, the task of the adaptive filter is to cancel unknown interferences contained in a primary signal. The

    primary signal is a mixture of the desired signal and the interfer-

    ences. A reference signal is applied to the adaptive filter input

    and this signal is supposed to be very weakly correlated with the

    desired signal component in the primary signal and highly cor-

    related with the interference component. This is illustrated in

    Figure 1.13. Thus, the output of the adaptive filter becomes a

    good estimate of the interference. Subtracting this estimated in-


    terference y(n) from the primary input, we get the error signal

    e(n) as the cleaned-up desired signal.
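    A minimal sketch of this arrangement is given below: the primary input d(n) is the desired signal plus a filtered version of a reference noise source, the reference x(n) drives the adaptive filter (again adapted with an LMS-style update, our own choice), and the error e(n) recovers the desired signal. The interference path, filter length and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 20000

desired = np.sin(2 * np.pi * 0.01 * np.arange(n_samples))   # desired signal component
x_ref = rng.standard_normal(n_samples)                       # reference input x(n)
g = np.array([0.9, 0.5, -0.3])                                # assumed interference path
interference = np.convolve(x_ref, g)[:n_samples]
d = desired + interference                                    # primary input d(n)

N, mu = 8, 0.01
w = np.zeros(N)
e = np.zeros(n_samples)
for n in range(N - 1, n_samples):
    x_vec = x_ref[n - N + 1:n + 1][::-1]    # reference regressor
    y = w @ x_vec                            # estimate of the interference, y(n)
    e[n] = d[n] - y                          # e(n): cleaned-up desired signal
    w += mu * e[n] * x_vec                   # LMS-style update
```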

    Figure 1.14: Application of adaptive interference cancellation for acoustic echo cancellation in teleconferencing systems.

    Typical applications of interference cancellation are in echo

    cancellation in telephone networks, active noise control and

    beamforming in communications. The application of acoustic

    echo cancellation in teleconferencing systems is illustrated in Fig-

    ure 1.14. The adaptive filter plays the role of creating a replica

    of the echo picked up by the microphone. It does this by us-

    ing the speech signal from the far-end speaker as the reference

    signal.


    1.4 Concluding Remarks

    Throughout this module, we will be using non-recursive filter structures for our study of adaptive algorithms.

    Our studies will be mainly focused on real-valued systems. For the sake of generality and completeness, we will give ex-

    tensions to complex-valued systems.

    We will not consider finite-precision effects and details of hardware implementation in this module.

    Our study will be mainly centered around stationary signal scenarios. Non-stationary signal scenarios and tracking is-

    sues will not be discussed in detail.


    1.5 Learning Objectives of this Module

    Formulation, design and development, and analysis of adaptive approaches for practical problems.

    Principles of iterative/adaptive approaches for optimization of nonlinear functions:

    Various algorithms (e.g. steepest descent, LMS, New-

    ton, RLS)

    Stochastic and deterministic approaches to optimum es-

    timation

    Simplified approaches to adaptive filtering

    Adaptive filtering in the presence of constraints.

    Analysis of iterative/adaptive algorithms:

    convergence behaviour (speed, stability)

    accuracy

    complexity

    Quantification of the goodness of adaptive algorithms.

    Influence of signal statistics on the performance of the algorithm.

    Influence of applications (e.g. modelling, inverse modelling, etc.) on the performance of the algorithm.

    Approaches to improve convergence of adaptive algorithms (e.g. transform domain adaptive filtering).


    Properties of transversal and lattice filter structures.

    Parametric modelling of random processes.

    Mathematical tools to help in algorithm development and analysis.

    Strengthen the fundamentals of signals and systems.

    Acquire deeper insights into digital signal processing methodologies, interpretation of signals, etc.

    Translation of mathematical results into physical interpretation in the study of signals and systems.


    1.6 Conventions and Notations

    Boldface uppercase letters denote matrices and boldface lowercase letters denote vectors. Non-boldface lowercase letters

    denote scalar quantities.

    Superscripts H, T and * denote Hermitian (conjugate) transposition, ordinary transposition, and complex conjuga-

    tion, respectively.

    Unless otherwise mentioned, the signals and systems are assumed to be real-valued.

    E[x] denotes the expectation of x and |x| denotes the magnitude of x. The operator z^{-1} denotes a one-sample delay.

    I_M denotes the M × M identity matrix. Boldface 0 denotes a vector or matrix of zeros, with appropriate dimensions.

    Let A be an M × M matrix with its (i, j)-th element given by A_{i,j} for i, j = 0, 1, ..., M-1. Then i) tr(A) stands for the trace of A and is equal to \sum_{i=0}^{M-1} A_{i,i}, ii) diag(A) stands for the diagonal of A and results in a diagonal matrix whose main diagonal is the same as the main diagonal of A, and iii) det(A) stands for the determinant of A. If x is an M × 1 vector, then diag(x) results in an M × M diagonal matrix whose main diagonal is equal to x.

    All the signals and systems considered here are discrete-time in nature. The signal samples are assumed to be at the


    normalized sampling period of 1 second, i.e. the sampling

    frequency is 1 Hz.

    The integers k and n are used to denote iteration indices and time indices, respectively. The time dependence of a variable x is represented by x(n), meaning the value of x at the n-th instant.

    Impulse responses of filters are represented using subscripted variables. For example, h_k denotes the value of the k-th coefficient (or weight, tap) in the discrete-time impulse re-

    sponse of a filter. These coefficients are also assumed to be

    at the spacing of 1 second.

    The autocorrelation of a random process {x(n)} is defined as \phi_{xx}(m) = E[x(n) x^*(n-m)] for m = 0, ±1, ±2, .... All the random processes are assumed to be zero-mean and

    wide-sense stationary, unless otherwise stated explicitly.

    Similarly, the cross-correlation between random processes {x(n)} and {y(n)} is defined as \phi_{xy}(m) = E[x(n) y^*(n-m)] for m = 0, ±1, ±2, ....

    The random processes {x(n)} and {d(n)} (i.e. the tap inputs and desired signal, respectively) are assumed to be

    individually and jointly wide-sense stationary.

    The power spectral density (PSD) of {x(n)} is defined as

        \Phi_{xx}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} \phi_{xx}(m) e^{-j\omega m},

    where \omega is the normalized frequency variable in radians (i.e. \omega = 2\pi F T = 2\pi f and f = FT, where F is the unnormalized frequency in Hz, T is the sampling period in seconds, and f is the normalized frequency).

    Similarly, the cross-spectral density between {x(n)} and {y(n)} is defined as

        \Phi_{xy}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} \phi_{xy}(m) e^{-j\omega m}.

    Note: Bandwidth for real and complex signals.

    The output of a filter with impulse response h_i, i = 0, 1, ..., N-1, and input x(n) is computed as

        y(n) = h_0 x(n) + h_1 x(n-1) + ... + h_{N-1} x(n-N+1).

    In vector notation, this can be written as y(n) = h^T x(n), where the N × 1 vectors h and x(n) are given by h = [h_0, h_1, ..., h_{N-1}]^T and x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, respectively.

    Let a = b^H c, where a is a complex scalar and b and c are complex vectors. Then a^* = b^T c^* = c^H b and |a|^2 = |b^H c|^2 = b^H c c^H b = b^H (c c^H) b.

    Let R denote the N × N correlation matrix of the signal {x(n)}. Further, let q_0, q_1, ..., q_{N-1} be the N orthonormal eigenvectors of R corresponding to the eigenvalues \lambda_0 \ge \lambda_1 \ge ... \ge \lambda_{N-1}. Then, its eigen (or, spectral)


    decomposition is given by

        R = Q \Lambda Q^T = \sum_{i=0}^{N-1} \lambda_i q_i q_i^T     (1.1)

    where Q = [q_0, q_1, ..., q_{N-1}] is a unitary matrix and \Lambda = diag[\lambda_0, \lambda_1, ..., \lambda_{N-1}] is a diagonal matrix.

    The trace of R is given by the sum of its eigenvalues:

        tr(R) = N \phi_{xx}(0) = \sum_{i=0}^{N-1} \lambda_i,     (1.2)

    where \phi_{xx}(0) = E[x^2(n)] is the diagonal element of R.

    The determinant of R is given by the product of its eigenvalues:

        det(R) = \prod_{i=0}^{N-1} \lambda_i.     (1.3)
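    A quick numerical check of relations (1.1)-(1.3) on an assumed correlation matrix (built here from an AR(1)-like autocorrelation sequence, purely for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

N = 4
phi = 0.9 ** np.arange(N)            # assumed autocorrelation sequence phi_xx(0..N-1)
R = toeplitz(phi)                     # N x N correlation matrix (symmetric Toeplitz)

lam, Q = np.linalg.eigh(R)            # eigenvalues lam_i and orthonormal eigenvectors q_i

# (1.1): R = Q Lambda Q^T = sum_i lam_i q_i q_i^T
R_rebuilt = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(N))
assert np.allclose(R, R_rebuilt)

# (1.2): tr(R) = N * phi_xx(0) = sum of eigenvalues
assert np.isclose(np.trace(R), N * phi[0])
assert np.isclose(np.trace(R), lam.sum())

# (1.3): det(R) = product of eigenvalues
assert np.isclose(np.linalg.det(R), lam.prod())
```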

    For any N × N matrices A and B, we have

        det(AB) = det(A) det(B).     (1.4)

    For any M × N matrix A and N × M matrix B, we have

        tr(AB) = tr(BA)     (1.5)
        (AB)^T = B^T A^T     (1.6)
        (AB)^H = B^H A^H     (1.7)
        (AB)^* = A^* B^*.     (1.8)

    For any N × N invertible matrices A and B, we have

        (AB)^{-1} = B^{-1} A^{-1}.     (1.9)