
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009 2919

Fast LMS/Newton Algorithms for Stereophonic Acoustic Echo Cancelation

Harsha I. K. Rao, Student Member, IEEE, and Behrouz Farhang-Boroujeny, Senior Member, IEEE

Abstract—This paper presents a new class of adaptive filtering algorithms to solve the stereophonic acoustic echo cancelation (AEC) problem in teleconferencing systems. While stereophonic AEC may be seen as a simple generalization of the well-known single-channel AEC, it is a fundamentally far more complex and challenging problem to solve. The main reason is the strong cross correlation that exists between the two input audio channels. In the past, nonlinearities have been introduced to reduce this correlation. However, nonlinearities bring with them additional harmonics that are undesirable. We propose an elegant linear technique to decorrelate the two-channel input signals and thus avoid the undesirable nonlinear distortions. We derive two low-complexity adaptive algorithms based on the two-channel gradient lattice algorithm. The models assume the input sequences to the adaptive filters to be autoregressive (AR) processes whose orders are much lower than the lengths of the adaptive filters. This results in an algorithm whose complexity is only slightly higher than that of the normalized least-mean-square (NLMS) algorithm, the simplest adaptive filtering method. Simulation results show that the proposed algorithms perform favorably when compared with the state-of-the-art algorithms.

Index Terms—Adaptive filters, lattice orthogonalization, LMS/Newton, stereo acoustic echo cancellation (AEC).

I. INTRODUCTION

THE past few years have witnessed the use of multichannel audio in teleconferencing systems. In particular, stereophonic systems are desirable as they provide the listener with spatial information to help distinguish possibly simultaneous talkers [1]. Acoustic echo cancelers are a necessary component of such teleconferencing systems, as they remove the undesired echoes that result from the coupling between the microphone and the loudspeakers [2]–[4]. This work proposes a new class of adaptive algorithms to solve the stereophonic acoustic echo cancelation (AEC) problem.

The setup of a typical stereophonic acoustic echo canceler as it exists in a teleconferencing system is shown in Fig. 1 [1], [5]. A transmission room is shown on the left, wherein two microphones are used to pick up the signals from a source via two acoustic channels characterized by the room impulse responses g1(n) and g2(n). The stereophonic signals are transmitted to the loudspeakers in the receiving room. These loudspeakers are

Manuscript received October 17, 2008; accepted March 07, 2009. First published April 07, 2009; current version published July 15, 2009. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jonathon A. Chambers.

The authors are with the Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112 USA (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TSP.2009.2020356

coupled to one of the microphones via the acoustic channels denoted by h1(n) and h2(n). A conventional acoustic echo canceler will try to model the acoustic paths in the receiving room using two finite-impulse response (FIR) adaptive filters, ĥ1(n) and ĥ2(n). If d(n) denotes the echo picked up by the microphone, then the two adaptive filters will use the input signals x1(n) and x2(n) to produce an estimate of d(n), represented by d̂(n). The difference between d(n) and d̂(n) should produce a residual echo signal e(n) close to zero. While this may seem like a straightforward extension of the single-channel AEC, we may note that the inputs x1(n) and x2(n) are derived from the same source and hence are highly correlated. The strong cross correlation between x1(n) and x2(n) can create problems in implementing adaptive algorithms.

The fundamental problem of stereophonic AEC was first addressed in [1], and more insight on this problem has been provided in [5]. An important result shown in [5] concerns the relative lengths of the far-end room echo paths, the modeling adaptive filters, and the near-end room echo paths: when the modeling filters are long enough relative to the echo paths, there exists a unique solution, and otherwise a misalignment will exist for both the single-channel/monophonic case and the stereophonic system. However, the problem is much greater in the two-channel setup due to the strong cross-correlation effects. The use of nonlinearities to decorrelate the input signals was first proposed in [5] and further investigated in [6]. While the various versions of the recursive least-squares (RLS) algorithm [7], [8] can provide excellent echo cancellation, the computational requirements for their implementation prevent them from being practical algorithms. Moreover, the strong correlation between the signals in the two channels results in a highly ill-conditioned covariance matrix, which in turn makes RLS algorithms sensitive to numerical errors. A leaky extended least-mean-square (XLMS) algorithm was proposed in [9] that aims to reduce the interchannel correlation without the addition of nonlinearities; hence, the quality and perception of the speech signals remain unaffected. The leaky XLMS algorithm was shown to perform satisfactorily at roughly twice the complexity of a conventional least-mean-square (LMS) algorithm. Other algorithms that have been proposed for stereophonic AEC are (i) the multichannel affine projection (AP) algorithm [10]; and (ii) exclusive maximum (XM) selective adaptation of filter taps [11]–[13].
While the XM selective-tap adaptation can be applied to the normalized least-mean-square (NLMS), AP, and RLS algorithms with and without nonlinear processing, it was noticed that a combination of nonlinearities along with selective-tap adaptation is more effective [11].

1053-587X/$25.00 © 2009 IEEE

Authorized licensed use limited to: The University of Utah. Downloaded on August 7, 2009 at 16:36 from IEEE Xplore. Restrictions apply.



Fig. 1. Setup of a stereophonic AEC system.

It is widely known that the backward prediction-error components of a lattice predictor are orthogonal [14]. This particular characteristic can be incorporated efficiently into a joint estimation setup to improve the convergence rate of the adaptive LMS algorithm [14], [15]. The basic cell of a two-channel lattice predictor has been described in [16], and this idea was used to derive algorithms for stereo echo cancellation using lattice orthogonalization and adaptive structures [17]. However, the lattice predictor has a complexity that grows rapidly with the filter length N [17], which is prohibitive for large values of N.

This paper develops a stereophonic extension of the fast LMS/Newton algorithm of [18]. This algorithm makes use of the orthogonalization property of the two-channel lattice predictor. We note that because of significant differences between the single-channel and multichannel lattice structures and their detailed properties, this extension is not straightforward. In particular, in a single-channel lattice structure, the close relationship between the forward and backward predictors greatly simplifies the development of the LMS/Newton algorithm of [18]. While the same simplifications do not exist in a multichannel setup [16], [19], it is still possible to use the properties of the two-channel lattice predictor to derive a two-channel version of the LMS/Newton algorithm. Autoregressive (AR) modeling for the purpose of linear predictive coding in speech processing has been found to be quite effective in the past. Usually, a model order in the range of 5 to 10 is more than sufficient to code speech signals [15]. This provided the rationale to model the input signals x1(n) and x2(n) as AR processes of order M, where M is much smaller than the filter length N. Consequently, only a few stages of the lattice predictor are sufficient to decorrelate the signals in the two channels of the acoustic echo canceler. The computational complexity reduces drastically, and simulation results show that the two versions of the algorithm that we propose perform favorably when compared with the other existing solutions for stereophonic AEC. This simple linear technique achieves the decorrelation without the use of any nonlinearities, thereby fully preserving the stereophonic quality of the speech signals.

The rest of this paper is organized as follows. In Section II, we briefly describe the two-channel lattice predictor. The derivation of the two-channel gradient lattice adaptive algorithm is provided in Section III. The two versions of our LMS/Newton algorithms for the stereophonic setup are presented in Section IV. The simulation results are discussed in Section V. Conclusions are drawn in Section VI.

In what follows, we denote vectors and matrices using bold-faced lower-case and upper-case characters, respectively, and vectors are always in column form. The superscript T denotes vector or matrix transpose.

II. TWO-CHANNEL LATTICE PREDICTOR

The characteristics of a lattice predictor make it an attractive proposition in adaptive filtering. It forms an integral part of a class of LMS/Newton algorithms that have been derived for the single-channel case in [18]. To facilitate the extension of this class of algorithms to stereophonic AEC, in this section we examine the properties of the two-channel lattice predictor.

A gradient-type lattice predictor algorithm has been derived for the purpose of stereophonic AEC in [17]. The two-channel lattice cell used in this algorithm was first derived in the least-squares context for multichannel adaptive filtering in [16]. The structure of a basic cell of the two-channel lattice predictor is shown in Fig. 2. From the figure, we note that the equations for updating the forward and backward prediction-errors of the m-th cell can be written as

$$\mathbf{f}_m(n) = \mathbf{f}_{m-1}(n) - \mathbf{K}_m(n)\,\mathbf{b}_{m-1}(n-1) \quad (1a)$$

$$\mathbf{b}_m(n) = \mathbf{b}_{m-1}(n-1) - \mathbf{K}_m^T(n)\,\mathbf{f}_{m-1}(n) \quad (1b)$$

where f_m(n) = [f_{m,1}(n), f_{m,2}(n)]^T and b_m(n) = [b_{m,1}(n), b_{m,2}(n)]^T are the 2×1 forward and backward prediction-error vectors, respectively, and the 2×2 reflection coefficient matrix is given by

$$\mathbf{K}_m(n) = \begin{bmatrix} \kappa_{m,11}(n) & \kappa_{m,12}(n) \\ \kappa_{m,21}(n) & \kappa_{m,22}(n) \end{bmatrix}.$$

The initialization of the lattice predictor is done as

$$\mathbf{f}_0(n) = \mathbf{x}(n) \quad (2a)$$

$$\mathbf{b}_0(n) = \mathbf{x}(n) \quad (2b)$$

where x(n) = [x1(n), x2(n)]^T is the stereo input vector.
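As an illustration, the order recursion and initialization of one two-channel lattice cell can be sketched in a few lines of NumPy. The symbols (f_m, b_m for the 2×1 prediction-error vectors, K_m for the 2×2 reflection coefficient matrix) follow assumed notation, and the sign convention is one common choice, not necessarily the paper's:

```python
import numpy as np

def lattice_stage(f_prev, b_prev_delayed, K):
    """One two-channel lattice cell: order-(m-1) errors in, order-m errors out.

    f_prev         : 2x1 forward prediction-error vector f_{m-1}(n)
    b_prev_delayed : 2x1 delayed backward error b_{m-1}(n-1)
    K              : 2x2 reflection coefficient matrix K_m(n)
    """
    f = f_prev - K @ b_prev_delayed      # forward recursion, cf. (1a)
    b = b_prev_delayed - K.T @ f_prev    # backward recursion, cf. (1b)
    return f, b

# Initialization, cf. (2a)-(2b): the order-0 errors equal the stereo sample.
x = np.array([0.3, -0.1])                # x(n) = [x1(n), x2(n)]^T
f0 = b0 = x.copy()

# With a zero reflection matrix and no past input, the stage passes f through.
f1, b1 = lattice_stage(f0, np.zeros(2), np.zeros((2, 2)))
```

With K_m = 0 the cell is transparent in the forward path, which is a quick sanity check when wiring several stages together.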

Authorized licensed use limited to: The University of Utah. Downloaded on August 7, 2009 at 16:36 from IEEE Xplore. Restrictions apply.



Fig. 2. Two-channel lattice cell.

A simple gradient adaptive algorithm can be used to compute K_m(n) in a recursive fashion. The reflection coefficients of the m-th cell can be chosen so as to minimize the instantaneous backward and forward prediction-errors of the corresponding cell. This leads to the following [17]:

$$\mathbf{K}_m(n+1) = \mathbf{K}_m(n) + \mu\,\frac{\mathbf{f}_m(n)\,\mathbf{b}_{m-1}^T(n-1) + \mathbf{f}_{m-1}(n)\,\mathbf{b}_m^T(n)}{P_{m-1}(n) + \epsilon} \quad (3)$$

where μ is the adaptation step-size parameter and ε is a constant added to prevent gradient noise amplification when any of the power terms P_{m-1}(n) are small. The powers are estimated using the following recursive equation:

$$P_{m-1}(n) = \beta P_{m-1}(n-1) + (1-\beta)\left[\|\mathbf{f}_{m-1}(n)\|^2 + \|\mathbf{b}_{m-1}(n-1)\|^2\right] \quad (4)$$

where β is a constant close to, but smaller than, one.

It has been noted in [17] that, upon convergence of the reflection coefficients, the above algorithm achieves complete orthogonalization of the two-channel backward prediction-errors; i.e., it has been assumed that B = E[b(n) b^T(n)], where b(n) = [b_0^T(n), b_1^T(n), ..., b_{N-1}^T(n)]^T, is a diagonal matrix. While this may hold good in certain scenarios, such as the case when the two input signals are uncorrelated, it is not true in general. A more rigorous examination reveals that for a stationary two-channel stereo input vector set {x(n)}, the backward prediction-error vectors b_0(n), b_1(n), ..., b_{N-1}(n) obtained using a lattice filter form an orthogonal set; i.e., b_m(n) is uncorrelated with b_k(n) for m ≠ k. In other words, E[b_m(n) b_k^T(n)] = Λ_m δ_{mk}, where δ_{mk} is the Kronecker delta that takes the value of 0 for m ≠ k and 1 for m = k. However, the 2×2-element autocorrelation matrix Λ_m = E[b_m(n) b_m^T(n)] may be nondiagonal. This implies that B is a block diagonal matrix [20].

Fig. 3. Plot of the covariance matrix B = E[b(n) b^T(n)].

Simulations were performed to examine the contribution of the first off-diagonal elements of B. Fig. 3 shows a plot of the magnitude of the elements of B. Results are obtained by averaging over 10 independent runs, and the length of each adaptive filter is chosen to be 32 taps. The inputs to the adaptive filters x1(n) and x2(n) are generated by filtering a zero-mean, unit-variance Gaussian sequence through two independent far-end room echo paths. The room echo paths are chosen to be independent, zero-mean Gaussian sequences, each having a variance that decays exponentially with the sample number. This experiment clearly shows that the magnitude of the first off-diagonal elements in each Λ_m is around 50% of the magnitude of the main diagonal elements, and hence their contribution cannot be ignored. As expected, the magnitudes of the remaining off-diagonal elements are much smaller and tend towards zero.
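The block-diagonal (rather than fully diagonal) structure of the backward-error covariance can also be checked numerically without running any adaptation: a unit block-lower-triangular transform that decorrelates a correlated stereo input across lags leaves the cross-stage blocks at zero while the 2×2 within-stage blocks remain nondiagonal. The NumPy sketch below uses short toy echo paths g1, g2 of our own choosing (not the paper's length-32 paths):

```python
import numpy as np

rng = np.random.default_rng(0)
T, Nblk = 50_000, 3                    # sample count, number of 2x1 stages kept
src = rng.standard_normal(T + 8)
g1 = np.array([1.0, 0.5, -0.3, 0.2])   # toy far-end echo paths (illustrative)
g2 = np.array([0.8, 0.4, 0.2, -0.1])
x1 = np.convolve(src, g1)[:T]          # correlated stereo pair from one source
x2 = np.convolve(src, g2)[:T]

# Stacked input x(n) = [x1(n), x2(n), x1(n-1), x2(n-1), ...]^T and its covariance.
rows = [np.roll(s, k)[8:] for k in range(Nblk) for s in (x1, x2)]
X = np.stack(rows)
R = (X @ X.T) / X.shape[1]

# Unit block-lower transform L with L R L^T block diagonal: take a Cholesky
# factor C and renormalize it by its own 2x2 diagonal blocks.
C = np.linalg.cholesky(R)
Dh = np.zeros_like(C)
for m in range(Nblk):
    s = slice(2 * m, 2 * m + 2)
    Dh[s, s] = C[s, s]
L = Dh @ np.linalg.inv(C)              # identity blocks on the diagonal
B = L @ R @ L.T                        # covariance of the decorrelated stages

off_block = B.copy()                   # zero out the 2x2 diagonal blocks
for m in range(Nblk):
    s = slice(2 * m, 2 * m + 2)
    off_block[s, s] = 0.0
```

The cross-stage entries of B vanish (orthogonality across stages), while B[0, 1], the within-stage cross term E[x1(n) x2(n)], stays large because both channels come from the same source.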




III. TWO-CHANNEL GRADIENT LATTICE ADAPTIVE ALGORITHM

A transversal filter used to estimate the echo d(n) picked up by the microphone from the input sequences x1(n) and x2(n) can be implemented using the two-channel lattice structure. The output d̂(n) of the transversal filters shown in Fig. 1 can be obtained as a linear combination of the elements of the backward prediction-error vector b(n). The lattice predictor is used to transform the input signals to the backward prediction-errors. The linear combiner uses these backward prediction-errors to produce an estimate of the echo. This is referred to as the lattice joint process estimator [15], [17].

In this section, a step-normalized LMS algorithm for the adaptive adjustment of the linear combiner part of a two-channel lattice joint process estimator is developed. The linear combiner coefficients, represented by the vector c(n), are updated using the following adaptive equation [15], [21]:

$$\mathbf{c}(n+1) = \mathbf{c}(n) + \mu\,\boldsymbol{\Lambda}^{-1}(n)\,\mathbf{b}(n)\,e(n) \quad (5)$$

where μ is the adaptation step-size parameter, Λ^{-1}(n) is the step-normalization matrix, and b(n) = [b_0^T(n), b_1^T(n), ..., b_{N-1}^T(n)]^T is a vector containing the backward prediction-errors of both channels. The error signal e(n) = d(n) − c^T(n) b(n), where d(n) is the desired signal as shown in Fig. 1.
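The step-normalized combiner update can be sketched compactly; the names c, b, lam_inv below are assumed notation, and with the normalization matrix set to the identity the recursion reduces to plain LMS, which gives an easy correctness check:

```python
import numpy as np

def combiner_update(c, b, d, lam_inv, mu):
    """One step-normalized LMS update of the linear combiner coefficients.

    c       : combiner coefficient vector c(n)
    b       : stacked backward prediction-error vector b(n)
    d       : desired (echo) sample d(n)
    lam_inv : step-normalization matrix (inverse power estimate)
    mu      : adaptation step size
    """
    e = d - c @ b                       # error signal e(n) = d(n) - c^T(n) b(n)
    return c + mu * (lam_inv @ b) * e, e

# Toy check: with lam_inv = I this is LMS and recovers a known combiner.
rng = np.random.default_rng(1)
c_true = np.array([0.5, -0.2, 0.1, 0.3])
c = np.zeros(4)
for _ in range(5000):
    b = rng.standard_normal(4)
    c, _ = combiner_update(c, b, c_true @ b, np.eye(4), 0.1)
```

In the noiseless toy problem the coefficient error contracts at every step, so c approaches c_true; the point of the normalization matrix, developed next, is to make that contraction rate uniform across modes.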

In [17], B was assumed to be a diagonal matrix. Accordingly, the step-normalization matrix Λ^{-1}(n) was chosen to be a 2N×2N diagonal matrix with diagonal elements 1/P_{m,i}(n), where P_{m,i}(n), m = 0, 1, ..., N−1, i = 1, 2, denotes the powers of the backward prediction-errors. The diagonal elements of Λ^{-1}(n) are used to normalize the power of the elements along the diagonal of B. The powers of the backward prediction-errors are computed recursively as

$$P_{m,i}(n) = \beta P_{m,i}(n-1) + (1-\beta)\,b_{m,i}^2(n). \quad (6)$$

In this paper, we correct the above normalization process to account for the block diagonal structure of B. We choose the step-normalization matrix Λ^{-1}(n) to be a 2N×2N block diagonal matrix with the block diagonal elements consisting of the 2×2 matrices Λ_m^{-1}(n), where

$$\boldsymbol{\Lambda}_m(n) = \begin{bmatrix} P_{m,1}(n) & P_{m,12}(n) \\ P_{m,12}(n) & P_{m,2}(n) \end{bmatrix} \quad (7)$$

and P_{m,1}(n) and P_{m,2}(n) are computed using (6) and the cross power P_{m,12}(n) is computed recursively as

$$P_{m,12}(n) = \beta P_{m,12}(n-1) + (1-\beta)\,b_{m,1}(n)\,b_{m,2}(n). \quad (8)$$
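The power recursions in (6) and (8) share one form: each 2×2 block is an exponentially weighted estimate of E[b_m(n) b_m^T(n)], with the channel powers on the diagonal and the cross power off it. A sketch under assumed variable names:

```python
import numpy as np

def update_block_power(lam_m, b_m, beta=0.99):
    """Exponentially weighted 2x2 power-estimate block for stage m.

    Combines (6) (the two diagonal channel powers) and (8) (the cross
    power) into a single rank-one recursive update of the block.
    """
    return beta * lam_m + (1.0 - beta) * np.outer(b_m, b_m)

# With a fixed b_m the estimate converges to the outer product b_m b_m^T.
lam = np.zeros((2, 2))
b = np.array([1.0, -0.5])
for _ in range(2000):
    lam = update_block_power(lam, b, beta=0.99)
```

In the adaptive filter, each converged block would then be inverted (a closed-form 2×2 inverse) to build the block diagonal Λ^{-1}(n).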

We shall now briefly analyze the behavior of the mean values of the linear combiner coefficients. If c_o denotes the optimum linear combiner coefficients in a joint estimation setup, then we have [15]

$$\mathbf{c}_o = \mathbf{B}^{-1}\mathbf{p} \quad (9)$$

where B = E[b(n) b^T(n)] and p = E[b(n) d(n)]. If we define a coefficient error vector as v(n) = c(n) − c_o, we note that

$$e(n) = e_o(n) - \mathbf{v}^T(n)\,\mathbf{b}(n) \quad (10)$$

where

$$e_o(n) = d(n) - \mathbf{c}_o^T\,\mathbf{b}(n). \quad (11)$$

According to the principle of orthogonality, we have E[e_o(n) b(n)] = 0 [15]. Then it follows from (5), (10) and the independence assumption commonly used in the adaptive filtering literature [14], [15] that

$$\mathrm{E}[\mathbf{v}(n+1)] = \left(\mathbf{I} - \mu\,\boldsymbol{\Lambda}^{-1}\mathbf{B}\right)\mathrm{E}[\mathbf{v}(n)]. \quad (12)$$

From (12), we note that the convergence behavior of the two-channel gradient lattice adaptive algorithm is controlled by the eigenvalues of the matrix I − μΛ^{-1}B. Each eigenvalue will determine a particular mode of convergence in the direction defined by its associated eigenvector [15]. Thus, an appropriate choice of the normalization matrix will result in Λ^{-1}B = I. Consequently, the joint process estimator will be controlled by just one mode of convergence. In Fig. 4, we present pictorial representations of the normalized covariance matrix Λ^{-1}B for the case when (a) Λ is the diagonal matrix as defined in [17]; and (b) Λ is the block diagonal matrix constructed using (7); B is the covariance matrix as shown in Fig. 3. The inputs to the adaptive filters x1(n) and x2(n) are the same as the ones described in Section II while simulating Fig. 3. These plots clearly indicate that only the covariance matrix normalized using the block diagonal matrix will equalize the eigenvalues and hence result in equal modes of convergence. This confirms the validity of our argument that normalization should be performed using the block diagonal matrix.

A. Misadjustment of Lattice Joint Process Estimator

It is important to note that in a two-channel lattice joint process estimator, the reflection coefficients and the linear combiner coefficients are updated simultaneously. Consequently, any change in the reflection coefficients will require readjustment of the linear combiner coefficients, and this will lead to a significant increase in the steady-state mean-square error (MSE). This particular characteristic of the lattice joint process estimator is discussed in detail for a single-channel setup in [15]. It is demonstrated in [15] that the adaptation of the reflection coefficients has to be stopped after some initial convergence to achieve a low steady-state mean-square error. However, in the case of speech inputs, the optimum reflection coefficients are time-varying, since they have to track the time-varying statistics of the inputs. Hence, this requires continuous adaptation of the reflection coefficients as well as




Fig. 4. Normalized covariance matrix Λ^{-1}B with (a) the wrong assumption that B is a diagonal matrix, and (b) the correct assumption that B is a block diagonal matrix.

Fig. 5. Comparison of the MSE. (Solid: the wrong assumption that B is a diagonal matrix; dashed: the correct assumption that B is a block diagonal matrix.)

the linear combiner coefficients, which will result in a further increase in the MSE.

We will demonstrate the above phenomenon by a simulation example. We will also use this example to compare the performance of the two-channel gradient lattice adaptive algorithm normalized using the diagonal matrix of [17] with the same algorithm normalized using the block diagonal matrix as defined in (7). Fig. 5 presents a pair of learning curves for the modeling problem using the same set of inputs, far-end room, and near-end room conditions as described previously. Uncorrelated noise is added to d(n) such that a signal-to-noise ratio (SNR) of 40 dB is achieved in our simulations. To demonstrate the effect of the perturbations of the reflection coefficients K_m(n), we stopped adapting them from iteration 30 000 onward. It is apparent that the continuous adaptation of the reflection coefficients has a significant impact on the MSE. Once the adjustment of the reflection coefficients is stopped, the algorithm converges quickly to the noise floor level. While both pairs of learning curves eventually converge to the same steady-state MSE, we can clearly see that the slow modes of convergence are absent (or, at least, less dominant) when the block diagonal matrix is used for normalization instead of the diagonal matrix.

While this simulation example provides further validation of our earlier observations, the continuous perturbation of the reflection coefficients will limit the applicability of the gradient lattice adaptive algorithm. Furthermore, the high computational complexity of the two-channel lattice joint process estimator makes its implementation impractical for very long filter lengths. In the rest of this paper, we derive and study two low-complexity adaptive algorithms that make use of the properties of the two-channel gradient lattice algorithm and at the same time are insensitive to the perturbations of the reflection coefficients.

IV. LMS/NEWTON ALGORITHMS BASED ON AR MODELING

Two versions of the LMS/Newton algorithm based on autoregressive modeling were proposed for the single-channel case in [18]. These were based on the fact that the input sequence (a speech signal) to the adaptive filter can be modeled as an AR process of order M, where M can be much smaller than the filter length N. This results in an efficient way of updating the filter coefficients without having to estimate R^{-1}, where R is an estimate of the input correlation matrix and, in that case, x(n) = [x(n), x(n−1), ..., x(n−N+1)]^T is the mono-channel filter input vector. In this section, we derive the LMS/Newton algorithms for the stereophonic setup.

For a stereophonic system, we note that b(n) can be expressed as

$$\mathbf{b}(n) = \mathbf{L}(n)\,\mathbf{x}(n) \quad (13)$$

where x(n) is the 2N×1 filter input vector given as x(n) = [x^T(n), x^T(n−1), ..., x^T(n−N+1)]^T, with x(n) = [x1(n), x2(n)]^T, and L(n) is the 2N×2N transformation matrix [14] and has the form

$$\mathbf{L}(n) = \begin{bmatrix}
\mathbf{I} & \mathbf{O} & \cdots & \mathbf{O} \\
-\mathbf{G}_{1,1}(n) & \mathbf{I} & \cdots & \mathbf{O} \\
\vdots & \vdots & \ddots & \vdots \\
-\mathbf{G}_{N-1,N-1}(n) & -\mathbf{G}_{N-1,N-2}(n) & \cdots & \mathbf{I}
\end{bmatrix} \quad (14)$$
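The Gram-Schmidt relation (13) and its inversion (15) can be illustrated with a tiny unit block-lower-triangular transformation. The 2×2 blocks below are illustrative values of our own choosing, not estimated predictor coefficients:

```python
import numpy as np

# Hypothetical 2x2 backward predictor coefficient blocks (illustrative only).
G = {(1, 1): np.array([[0.5, 0.1], [0.0, 0.4]]),
     (2, 1): np.array([[0.2, 0.0], [0.1, 0.3]]),
     (2, 2): np.array([[0.1, 0.05], [0.0, 0.1]])}

N = 3
L = np.eye(2 * N)
for m in range(1, N):
    for i in range(1, m + 1):
        # Block row m, block column m - i holds -G_{m,i}: the order-m backward
        # predictor subtracts a weighted combination of more recent inputs.
        L[2*m:2*m+2, 2*(m-i):2*(m-i)+2] = -G[(m, i)]

x = np.arange(1.0, 2 * N + 1)            # stacked stereo input x(n)
b = L @ x                                # cf. (13): b(n) = L(n) x(n)
x_back = np.linalg.solve(L, b)           # cf. (15): x(n) = L^{-1}(n) b(n)
```

Because L has identity blocks on its diagonal it is always invertible, which is what makes the correspondence between x(n) and b(n) one-to-one.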

Authorized licensed use limited to: The University of Utah. Downloaded on August 7, 2009 at 16:36 from IEEE Xplore. Restrictions apply.

Page 6: IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8 ...hrao/Journal09.pdf · IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009 2919 Fast LMS/Newton Algorithms

2924 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009

where I is a 2×2 identity matrix, O is a 2×2 zero matrix, and each G_{m,i}(n) is a 2×2 backward predictor coefficient matrix. Equation (13) is also known as the Gram-Schmidt orthogonalization algorithm [14]. This algorithm provides a one-to-one correspondence between the input vector x(n) and the backward prediction-error vector b(n). From (13), it follows that

$$\mathbf{x}(n) = \mathbf{L}^{-1}(n)\,\mathbf{b}(n). \quad (15)$$

On the other hand, the update equation for the ideal LMS/Newton algorithm is [14], [15]

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,e(n)\,\hat{\mathbf{R}}^{-1}\mathbf{x}(n) \quad (16)$$

where R̂^{-1} = L^T(n) Λ^{-1}(n) L(n). It is important to note that any perturbations in the reflection coefficients are only present in R̂^{-1}, which is incorporated in the update equation as a part of the step size. The other terms in the update part of (16), e(n) and x(n), are independent of any perturbations. Consequently, based on the assumption that μ is very small, the LMS/Newton algorithm remains robust despite the continuous adaptation of the reflection coefficients. However, in (5), the linear combiner coefficients are updated using the backward prediction-error vector b(n) and the error signal e(n), which is also computed using b(n). The time-varying nature of the reflection coefficients will result in time-varying backward prediction-errors and thus adversely affect the MSE performance.

Using (13) and (15), (16) can be written as

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,e(n)\,\mathbf{u}(n) \quad (17)$$

where

$$\mathbf{u}(n) = \mathbf{L}^T(n)\,\boldsymbol{\Lambda}^{-1}(n)\,\mathbf{b}(n). \quad (18)$$

The significance of (17) is that the computation of u(n) according to (18) can be performed at a low complexity. In the sequel, we derive two implementations of the LMS/Newton algorithms based on (17) and (18). Henceforth, we will refer to them as Algorithm 1 and Algorithm 2.
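Taken together, (13), (17), and (18) give one iteration of the stereophonic LMS/Newton update. The dense sketch below costs O(N²) per step as written; the whole point of the structured implementations that follow is to avoid this cost. Symbols are assumed notation:

```python
import numpy as np

def lms_newton_step(w, x, d, L, lam_inv, mu):
    """One dense iteration of the stereophonic LMS/Newton update.

    w       : adaptive filter coefficient vector w(n)
    x       : stacked stereo input vector x(n)
    L       : block Gram-Schmidt transformation, cf. (13)
    lam_inv : (block-)inverse of the backward-error power matrix
    """
    e = d - w @ x                     # a-priori error signal
    b = L @ x                         # cf. (13): backward prediction errors
    u = L.T @ (lam_inv @ b)           # cf. (18)
    return w + mu * e * u, e          # cf. (17)

# Toy check: with L = I and lam_inv = I the step reduces to LMS and
# identifies a known 4-tap system from white input.
rng = np.random.default_rng(2)
w_true = np.array([0.4, -0.3, 0.2, 0.1])
w = np.zeros(4)
for _ in range(5000):
    x = rng.standard_normal(4)
    w, _ = lms_newton_step(w, x, w_true @ x, np.eye(4), np.eye(4), 0.1)
```

For correlated input, supplying the true L and Λ^{-1} makes the update behave like a Newton step, equalizing the modes of convergence.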

A. Algorithm 1

This algorithm involves the direct implementation of (18) through the use of a lattice predictor. Since we are assuming the input sequence x(n) to be an AR process of order M, a lattice predictor of order M is sufficient. The matrix L(n) then takes the form of (19), shown at the bottom of the page, and the vector u(n) takes the form of

$$\mathbf{u}(n) = [\mathbf{u}_0^T(n), \mathbf{u}_1^T(n), \ldots, \mathbf{u}_{N-1}^T(n)]^T. \quad (20)$$

The special structure of L(n) in (19) requires us to update only the first and last few elements of u(n). The remaining elements are just delayed versions of u_M(n).

We first consider the multiplication of Λ^{-1}(n) by b(n). It involves the estimation of the powers of b_0(n) through b_M(n). The powers of these backward prediction-error vectors are computed using (6) and (8). Unlike the single-channel implementation in [18], wherein the normalization matrix is diagonal, we now have to consider the block diagonal structure of Λ(n). Hence, the 2×2 normalization matrices Λ_m(n), m = 0, 1, ..., M, are constructed as described in (7). We note that in a typical acoustic echo canceler, M can be chosen to be as small as 8, and inverting matrices of size 2×2 constitutes only a small percentage of the acoustic echo canceler complexity.

To complete the computation of u(n), we now have to multiply Λ^{-1}(n) b(n) by L^T(n) (according to (18)). A close examination of the structures of the matrix L(n) and the vector u(n) described in (19) and (20), respectively, will reveal that in order to compute u(n), only the first 2(M+1) and the last 2M elements of u(n) need to be computed. The remaining elements of u(n) are the delayed versions of its (2M+1)th and (2M+2)nd elements. The elements of L(n) can be estimated using the two-channel Levinson-Durbin algorithm [20], and we note that the coefficients of prediction filters of order 1 to M need to be computed. Accordingly, we formulate Algorithm 1 as shown.

1) Run the lattice predictor of order M using (1)–(4) and (6)–(8) to obtain the reflection coefficients and the backward prediction-errors.

2) Run the two-channel Levinson-Durbin algorithm to convert the reflection coefficients to the backward predictor coefficient matrices G_{m,i}(n). As the derivation of the two-channel version of the Levinson-Durbin algorithm

$$\mathbf{L}(n) = \begin{bmatrix}
\mathbf{I} & \mathbf{O} & \cdots & & & & \mathbf{O} \\
-\mathbf{G}_{1,1}(n) & \mathbf{I} & & & & & \vdots \\
\vdots & & \ddots & & & & \\
-\mathbf{G}_{M,M}(n) & \cdots & -\mathbf{G}_{M,1}(n) & \mathbf{I} & & & \\
\mathbf{O} & -\mathbf{G}_{M,M}(n) & \cdots & -\mathbf{G}_{M,1}(n) & \mathbf{I} & & \\
\vdots & & \ddots & & \ddots & \ddots & \\
\mathbf{O} & \cdots & \mathbf{O} & -\mathbf{G}_{M,M}(n) & \cdots & -\mathbf{G}_{M,1}(n) & \mathbf{I}
\end{bmatrix} \quad (19)$$

Authorized licensed use limited to: The University of Utah. Downloaded on August 7, 2009 at 16:36 from IEEE Xplore. Restrictions apply.


is not commonly found in the literature, we have provided a derivation of it in Appendix A. The equations for this particular step are shown here for completeness.

(21)

where each is a 2 × 2 forward predictor coefficient matrix.1

3) Compute the elements that are the delayed versions of the th and nd elements of as follows

(22a)

(22b)

Note will be the th 2 × 1 vector component of .

4) Compute the first elements of . If denotes the first elements of the normalized backward prediction-error vector such that

(23)

where , is as defined in (7), , and denotes the top-left part of having dimension , then the first elements of are

(24)

1It is important to note that in the single-channel lattice, the backward predictor coefficients are simply the forward predictor coefficients in reversed order [15]. However, such a relationship does not hold in a two-channel lattice. Thus, some simplifications that are applicable to single-channel lattice equations are inapplicable to the two-channel case. Consequently, direct mimicking of the results of [18] is not possible here and, thus, we provide a fresh derivation of Algorithms 1 and 2, independent of [18].

5) Similarly, compute the last elements of . Let denote the last elements of the normalized backward prediction-error vector such that

(25)

and is the bottom-right part of having dimension , then the last elements of are

(26)

6) Finally, compute the adaptive filter output and the error signal, and update the filter taps using (17).
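Step 2 above relies on the Levinson-Durbin recursion to turn reflection coefficients into direct-form predictor coefficients of every order. The two-channel version in (21) works with 2 × 2 matrix coefficients; as a minimal sketch of the idea, the single-channel analogue (our own illustrative code, with the common sign convention) is:

```python
def reflection_to_predictor(k):
    """Single-channel Levinson-Durbin step: convert reflection
    coefficients k[0..K-1] into direct-form predictor coefficients.

    Returns the predictors of ALL orders 1..K, since (as noted in the
    text) the coefficients of prediction filters of every intermediate
    order are needed.  Order update used:
        a_m[i] = a_{m-1}[i] + k_m * a_{m-1}[m-1-i],  a_m[m] = k_m
    (sign conventions for k vary between texts).
    """
    predictors = []
    coeffs = []
    for m, km in enumerate(k):
        new = [coeffs[i] + km * coeffs[m - 1 - i] for i in range(m)]
        new.append(km)
        coeffs = new
        predictors.append(list(coeffs))
    return predictors
```

In the two-channel case each `km` and each coefficient becomes a 2 × 2 matrix and products become matrix products, but the order-recursive structure is the same.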

To implement the lattice predictor using (1)–(4) and (6)–(8), we require multiplications. The Levinson-Durbin algorithm given in (21) requires multiplications. We further need multiplications to update using (24) and (26). Finally, multiplications are required to adaptively update the transversal filter coefficients. Hence, in order to implement the fast LMS/Newton Algorithm 1, we require a total of multiplications. The number of required additions is about the same. Typically, can take a value of 8 and the adaptive filter length may be 1500 for a medium-size office room. With these numbers, each update of would make up only 17% of the total computational complexity of the acoustic echo canceler.

B. Algorithm 2

The two-channel LMS/Newton Algorithm 1 is structurally complicated despite having reasonably low computational complexity. Manipulating the data is not straightforward, and hence the algorithm is more suitable for software implementation. We now propose an alternative algorithm that is computationally less complex and can be easily implemented in hardware.

If we look at the matrix given in (19), we observe that only the first rows of this matrix are uniquely represented. The remaining rows are just the delayed versions of the th and nd rows given by . Now, if we are able to remove the first rows of from our computation of , then we shall be able to simplify Algorithm 1. This leads to the development of the fast LMS/Newton Algorithm 2. This particular version of the LMS/Newton algorithm can be developed by extending the input and tap-weight vectors and , to the following vectors

(27)


and

(28)

respectively, and applying (16) to update the extended tap-weight vector . We also need to appropriately take care of the dimensions of and . Since we are only interested in the tap-weights corresponding to , the first and the last elements of the extended tap-weight vector can be permanently set to zero by assigning a zero step-size parameter to all of them. This also removes the computation of the first and the last elements of . Hence, the recursive equation is now modified to

(29)

where

(30)

is a matrix defined as (31), shown at the bottom of the page, and is a matrix defined as shown in (32) at the bottom of the page. Upon examining (30), we can see that it is only necessary to update the first 2 × 1 element vector of and then the first 2 × 1 element vector of the final result, . The remaining elements will be the delayed versions of these first two elements.
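This "compute the head, shift the rest" structure is what makes the update cheap: per sample, only the newest elements are computed and everything else is a delayed copy. A minimal sketch of that buffer discipline (the function and buffer names are our own illustration, not the paper's notation):

```python
from collections import deque

def shift_in(buffer, new_head):
    """Per-sample update of a vector whose entries, apart from the
    first few, are delayed copies of earlier heads.

    The existing entries slide toward the tail (the oldest fall off),
    and only the freshly computed head slots are written.
    """
    buffer.rotate(len(new_head))      # old entries move toward the tail
    for i, v in enumerate(new_head):  # overwrite the head slots only
        buffer[i] = v
    return buffer
```

With a two-element head, each call costs O(1) new arithmetic regardless of the buffer length, mirroring why only the first 2 × 1 vector of the result needs computing.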

Recall that the forward and backward prediction errors are given as

(33)

and

(34)

respectively. It is well-known that in a single-channel lattice, and this relationship be-

tween the forward and backward predictor coefficients was usedto derive the single-channel LMS/Newton algorithm in [18]. Ina two-channel lattice, for only[refer to (A6a) and (A8a) in the Appendix A]. But based onthe perspective gained from extensive experimentation, we ob-served that this relationship also approximately holds true for

. Hence, we introduce the approximate relation-ship between the forward and backward predictor coefficients as

(35)

The main motivation behind introducing this approximation is to use the transposed backward predictor coefficients in reverse order to estimate the forward prediction errors. Consequently, we can rewrite (33) as

(36)

From (31) and (34), we recognize that filtering the input vector through a backward prediction-error filter to obtain is equivalent to evaluating . The backward prediction-error vector is normalized with , where is the 2 × 2 matrix constructed according to (7). This gives us an update of . We then use the normalized backward prediction-error vector as an input to a filter whose coefficients are the transposed duplicates of the backward prediction-error filter in reverse order. We recognize from (32) and (36) that this filter turns out to be the forward prediction-error filter, assuming that (35) holds. As a result, the output of the forward prediction-error filter provides us with the samples of the vector . Thus, the approximation introduced in (35) facilitates the development of an algorithm that can be efficiently implemented in hardware. At the same time, this algorithm was shown to satisfactorily exhibit the fast converging characteristics of the LMS/Newton algorithm over a wide range of experiments.
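In the single-channel case the relation that (35) approximates is exact: the backward prediction-error filter is the forward one with its taps in reverse order, so a single coefficient memory read in two directions realizes both filters. A small self-contained sketch with hypothetical tap values (our illustration, not the paper's two-channel filters):

```python
def fir(coeffs, x):
    """Plain FIR filtering y[n] = sum_i coeffs[i] * x[n-i],
    assuming zeros before the start of the sequence."""
    return [sum(c * (x[n - i] if n - i >= 0 else 0.0)
                for i, c in enumerate(coeffs))
            for n in range(len(x))]

forward_pef = [1.0, -0.6, 0.2]    # hypothetical order-2 forward PEF taps
backward_pef = forward_pef[::-1]  # single-channel backward PEF: reversed taps

x = [0.3, -1.1, 0.7, 0.9, -0.4]
# Reading the backward taps in reverse order recovers the forward
# prediction errors exactly -- the hardware reuse Algorithm 2 exploits.
assert fir(backward_pef[::-1], x) == fir(forward_pef, x)
```

In the two-channel case the taps are 2 × 2 matrices, the reversal is combined with a transpose, and the equality holds only approximately, which is precisely the content of (35).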

(31)

(32)


In a practical implementation, we choose the original input to the predictor filter to be and not . To account for this delay, the desired signal is delayed by samples to be time aligned with . This results in a delayed LMS algorithm whose performance is very close to that of its nondelayed version when . Moreover, since the power terms are assumed to be time invariant over the length of the prediction filters, the normalization block is moved to the output of the forward prediction-error filter. Accordingly, we formulate Algorithm 2 as shown.
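The delayed-LMS idea above, adapting with an error that is a few samples old, can be sketched in its generic single-channel form (an illustrative toy, not the paper's normalized two-channel update; tap count, delay, and step size below are assumed values):

```python
def delayed_lms(x, d, n_taps, delay, mu):
    """Delayed LMS: the weight update uses the input/error pair from
    'delay' samples ago, as happens when a prediction front-end of
    length K sits before the adaptive filter.

    For small mu and delay much shorter than the filter, behavior is
    very close to ordinary LMS.
    """
    w = [0.0] * n_taps
    errs = []
    pending = []  # (input vector, error) pairs awaiting use
    for n in range(len(x)):
        u = [x[n - i] if n - i >= 0 else 0.0 for i in range(n_taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        errs.append(e)
        pending.append((u, e))
        if len(pending) > delay:            # adapt with the delayed pair
            u_old, e_old = pending.pop(0)
            w = [wi + mu * e_old * ui for wi, ui in zip(w, u_old)]
    return w, errs
```

With `delay=0` this reduces to ordinary LMS; the delay simply postpones each gradient step without changing the solution it converges to.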

1) Run the lattice predictor of order using (1)–(4) and (6)–(8) to obtain the reflection coefficients and the backward prediction-error vector .

2) Compute the elements of that are the delayed versions of its first two elements.

(37a)

(37b)

3) Run the lattice predictor of order using (1) and (2) (the reflection coefficients have already been computed in Step 1) with as the input to obtain the forward prediction-error vector .

4) Compute the first two elements of as the first two elements of premultiplied by the 2 × 2 normalization matrix .

This particular version of the LMS/Newton algorithm is computationally less intensive than Algorithm 1. To implement the lattice predictor using (1)–(4) and (6)–(8), we require multiplications. Updating using the forward prediction-error filter requires a further multiplications. If we include the adaptive transversal filter updates, Algorithm 2 requires a total of multiplications. Thus, for and , updating constitutes only 4% of the total complexity of the acoustic echo canceler.
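Steps 1 and 3 both run an order-recursive lattice prediction-error filter driven by the same set of reflection coefficients. As a minimal single-channel sketch of that recursion (our own code; in the two-channel case the reflection coefficients are 2 × 2 matrices):

```python
def lattice_pef(k, x):
    """Order-recursive lattice prediction-error filter.

    Given reflection coefficients k[0..K-1], returns the order-K
    forward and backward prediction errors for each sample of x,
    using the standard stage recursions
        f_m(n) = f_{m-1}(n) + k_m * b_{m-1}(n-1)
        b_m(n) = b_{m-1}(n-1) + k_m * f_{m-1}(n).
    """
    f_out, b_out = [], []
    b_prev = [0.0] * len(k)   # b_{m-1}(n-1) for each stage m
    for xn in x:
        f = b = xn            # order-0 errors equal the input
        for m, km in enumerate(k):
            f_new = f + km * b_prev[m]
            b_new = b_prev[m] + km * f
            b_prev[m] = b     # becomes b_{m-1}(n-1) at the next sample
            f, b = f_new, b_new
        f_out.append(f)
        b_out.append(b)
    return f_out, b_out
```

Note the per-sample cost is linear in the predictor order, which is why a short lattice (order 8) in front of a 1024-tap filter adds so little to the total complexity.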

V. SIMULATION RESULTS

A. Experiments With a Stationary Signal

We shall now present the simulation results and compare the performance of the two versions of the LMS/Newton algorithm with the NLMS algorithm, the XM selective-tap NLMS implementation of [11], and the leaky XLMS algorithm of [9]. Though the computational complexity of the leaky XLMS algorithm is almost twice that of the NLMS algorithm [9], it does not add any signal distortion and hence provides a suitable benchmark for comparison. While Algorithm 1 is an exact implementation of the ideal two-channel LMS/Newton algorithm, an approximation was introduced in the form of (35) while deriving Algorithm 2. Hence, we refer to them in our results as Exact Algorithm 1 and Approximate Algorithm 2, respectively. We also note that we have not added any form of nonlinearity to any of these algorithms. The room echo paths are independent, zero-mean Gaussian sequences, each having a variance that decays at the rate of , where is the sample number. Several experiments confirmed that this particular model generates responses that closely approximate the characteristics of a typical room echo path, as depicted in [22]. The length of the modeling adaptive filters is set equal to 1024, and the length of the near-end room echo paths is also selected to be the same. Upon extensive experimentation, it was observed that the algorithms exhibited satisfactory performance when the order of the AR model is chosen to be 8. The reference inputs to the adaptive filters are generated by filtering a zero-mean, unit-variance Gaussian sequence through the two far-end room echo paths, each having length equal to 2048. This particular value will satisfy the condition for the existence of a unique solution, i.e., [5]. In all our simulations, we have selected , and the power terms in (4), (6), and (8) are initialized to one. For the XM-NLMS algorithm, we chose the size of the tap-selection set (denoted by the parameter in [11]) to be . For the leaky XLMS algorithm, we chose the correlation coefficient to be 0.5 and the leakage factor to be . The notations are the same as those used by the authors in [9]. Uncorrelated noise is added to such that an SNR of 40 dB is achieved, and the simulation results are obtained after averaging over 10 independent runs for each case.
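Synthetic echo paths of the kind described above, independent zero-mean Gaussian taps with a variance that decays with the sample number, are easy to generate. A sketch (the decay constant and seed are assumed illustrative values; the paper's exact decay rate did not survive extraction):

```python
import math
import random

def synth_echo_path(length=1024, decay=0.004, seed=0):
    """Synthetic room echo path: independent zero-mean Gaussian taps
    whose standard deviation decays exponentially with tap index n.

    'decay' controls how fast the reverberant tail dies out; 0.004 is
    an assumed value chosen only so the tail is small by tap 1024.
    """
    rng = random.Random(seed)
    return [rng.gauss(0.0, math.exp(-decay * n)) for n in range(length)]
```

Drawing each of the four paths (two far-end, two near-end) with independent seeds reproduces the "independent, zero-mean Gaussian sequences" setup used in the experiments.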

Fig. 6(a) compares the MSE and Fig. 6(b) compares the normalized misalignment2 curves of the two versions of the LMS/Newton algorithms with the other algorithms. We can see that, among all the implementations, our algorithms converge the fastest. We recognize that Algorithm 1 is an exact implementation of the ideal LMS/Newton algorithm, for which extensive theoretical analysis is available in [4], [14], and [15]. The ideal LMS/Newton algorithm does not suffer from any eigenvalue spread and has only one mode of convergence. Hence, it is particularly important to note that Exact Algorithm 1, as expected, is governed by a single mode of convergence. Moreover, this also holds approximately in the case of Approximate Algorithm 2.

We have also studied the adaptation of the algorithms when there is an abrupt change in the far-end room echo paths. The new room echo paths are also independent, zero-mean Gaussian sequences, but each of them has a variance that increases at the rate of , where is the sample number. While this model may not describe a typical room echo path, the main intention was to observe the behavior of the algorithms in the event of a drastic change in the far-end room. Simulation results shown in Fig. 7 indicate that, compared with the other implementations, the LMS/Newton algorithms exhibit a faster response to the echo path change. Also, the improvement in misalignment is more significant.

B. Experiments With Speech Signals

The algorithms are also tested to evaluate their effectiveness when speech signals are used as inputs. Training adaptive filters using speech is challenging because of the nonstationary nature of speech signals and their wide dynamic range. Fortunately, the presence of the normalization factor in front of the stochastic gradient vector in (16) leaves the step size independent of the power of the input signal.

2Normalized misalignment is computed as ‖h(n) − ĥ(n)‖² / ‖h(n)‖², where h(n) is the true room echo path and ĥ(n) is the estimated room echo path.
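The misalignment measure of footnote 2 is a one-liner to evaluate; a sketch in dB (the function name is ours):

```python
import math

def misalignment_db(h_true, h_est):
    """Normalized misalignment ||h - h_hat||^2 / ||h||^2, in dB.

    0 dB means the estimate is as far from the true path as the zero
    vector; more negative values mean better identification.
    """
    num = sum((a - b) ** 2 for a, b in zip(h_true, h_est))
    den = sum(a * a for a in h_true)
    return 10.0 * math.log10(num / den)
```

For example, an estimate whose only error is a 10% shortfall on a single unit tap sits at −20 dB.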


Fig. 6. (a) Comparison of the MSE. (b) Comparison of the misalignment. (Solid—Exact Algorithm 1, Dashed—Approx. Algorithm 2, Dotted—NLMS, Dash-dotted—XM-NLMS and Thick Solid—Leaky XLMS).

Fig. 7. (a) Comparison of the MSE. (b) Comparison of the misalignment. An abrupt change is made in the far-end room at iteration 150,000. (Solid—Exact Algorithm 1, Dashed—Approx. Algorithm 2, Dotted—NLMS, Dash-dotted—XM-NLMS and Thick Solid—Leaky XLMS).

Thus, the LMS/Newton algorithm resolves the problem of the dynamic range of the input process. Hence, we compare the performance of our algorithms with the NLMS algorithm, which is also known to be robust to the dynamic range of the input process. The NLMS algorithm is implemented using the XM tap-selection technique of [11].

In this particular set of experiments, the room echo paths are independent, zero-mean Gaussian sequences, each having a variance that decays at the rate of , where is the sample number. We select the adaptive filter length to be 1024 taps. At a sampling rate of 8 kHz, this enables us to model 128 ms of echo, which is reasonable for a medium-size office room. The length of the far-end room echo paths is set equal to 2048, and the length of the near-end room echo paths is selected to be 1024. Once again, it was observed that the algorithms exhibited satisfactory performance when the order of the AR model is chosen to be 8. For the simulation results presented here, we have selected , and the power terms in (4), (6), and (8) are initialized to one. For the XM-NLMS algorithm, we once again chose the size of the tap-selection set to be . A zero-mean Gaussian sequence whose variance was set at 40 dB below the variance of the echo signal is added to it. The simulation results are obtained after averaging over 10 independent runs.

We compare the performances of the different algorithms when there is an abrupt change in the far-end room. The echo-return-loss enhancement (ERLE) [17], [18] is chosen as the metric for performance evaluation. The ERLE is defined as

(38)


Fig. 8. (a) Comparison of the ERLE. (b) Comparison of the misalignment. An abrupt change is made in the far-end room after 18.75 s. (Solid—Exact Algorithm 1, Dashed—Approx. Algorithm 2 and Dotted—XM-NLMS).

where is the echo picked up by the microphone, is the uncorrelated Gaussian noise added to , and is the residual echo signal transmitted back to the far-end room. The measured ERLEs are based on averages of 1000 neighboring samples for each point of the plots. Fig. 8(a) and (b) compares the ERLE and the misalignment of the XM-NLMS algorithm with the LMS/Newton Algorithms 1 and 2, respectively. From these figures, it is clear that the results are consistent with those for the case when the input is white, and they highlight the superior performance of the proposed algorithms compared with the XM-NLMS algorithm. Moreover, Exact Algorithm 1 performs better, as expected, than Approximate Algorithm 2.
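A block-averaged ERLE of the kind plotted in Fig. 8 can be sketched as follows, taking ERLE as the ratio of echo power to residual-echo power in dB over 1000-sample blocks (the exact form of (38) did not survive extraction, so this is the standard definition, stated as an assumption):

```python
import math

def erle_db(echo, residual, block=1000):
    """Echo-return-loss enhancement per block, in dB: the ratio of
    echo power to residual-echo power, averaged over 'block'
    neighboring samples as in the plotted curves."""
    out = []
    for i in range(0, len(echo) - block + 1, block):
        pe = sum(v * v for v in echo[i:i + block]) / block
        pr = sum(v * v for v in residual[i:i + block]) / block
        out.append(10.0 * math.log10(pe / pr))
    return out
```

A residual ten times smaller than the echo in amplitude corresponds to 20 dB of ERLE, which is the scale on which the curves in Fig. 8(a) should be read.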

VI. CONCLUSION

We presented a two-channel gradient lattice adaptive algorithm for the problem of stereophonic AEC. The limitations of this algorithm led to the development of two new implementations of the two-channel version of the LMS/Newton algorithm and their application to stereo echo cancellation. The implementations provide for efficient realization of long adaptive filters. An exact implementation of the LMS/Newton algorithm was derived, but the structural complexity of this algorithm might limit its applicability in a hardware/custom chip. A second algorithm was proposed to overcome this limitation. The ill-conditioned nature of the stereophonic AEC problem can result in high misalignment [5]. This leads to adaptive algorithms being sensitive to changes in the far-end room [2], [8]. However, the fast convergence of our algorithms helps to alleviate this problem to a great extent.

APPENDIX A
DERIVATION OF THE TWO-CHANNEL LEVINSON-DURBIN ALGORITHM

If forms an orthogonal basis set for , then the backward prediction-error vectors are given by

(A1)

Similarly, the forward prediction-error vectors are given by

(A2)

As in the well-known single-channel case [15], we can update the two-channel backward prediction-error and forward prediction-error vectors as

(A3)

and

(A4)

respectively. Using (A1) and (A2), we can expand (A3) as

(A5)

Upon equating the coefficients of , we have

(A6a)

(A6b)

Similarly, we can work with the forward prediction errors and use (A1) and (A2) to expand (A4) as

(A7)

Authorized licensed use limited to: The University of Utah. Downloaded on August 7, 2009 at 16:36 from IEEE Xplore. Restrictions apply.

Page 12: IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8 ...hrao/Journal09.pdf · IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009 2919 Fast LMS/Newton Algorithms

2930 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 8, AUGUST 2009

Once again, equating coefficients of will lead to the following result

(A8a)

(A8b)

Thus, (A6) and (A8) constitute the two-channel Levinson-Durbin algorithm that is used to convert the reflection coefficients to the predictor coefficients.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers whose constructive comments and suggestions greatly helped to improve the quality of the paper.

REFERENCES

[1] M. M. Sondhi, D. R. Morgan, and J. L. Hall, “Stereophonic acoustic echo cancellation—An overview of the fundamental problem,” IEEE Signal Process. Lett., vol. 2, no. 8, pp. 148–151, Aug. 1995.

[2] J. Benesty, T. Gänsler, D. R. Morgan, M. M. Sondhi, and S. L. Gay, Advances in Network and Acoustic Echo Cancellation. New York: Springer, 2001.

[3] E. Hänsler and G. Schmidt, Topics in Acoustic Echo and Noise Control. Berlin, Germany: Springer-Verlag, 2006.

[4] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.

[5] J. Benesty, D. R. Morgan, and M. M. Sondhi, “A better understanding and an improved solution to the specific problems of stereophonic acoustic echo cancellation,” IEEE Trans. Speech Audio Process., vol. 6, no. 2, pp. 156–165, Mar. 1998.

[6] D. R. Morgan, J. L. Hall, and J. Benesty, “Investigation of several types of nonlinearities for use in stereo acoustic echo cancellation,” IEEE Trans. Speech Audio Process., vol. 9, no. 6, pp. 686–696, Sep. 2001.

[7] J. Benesty, F. Amand, A. Gilloire, and Y. Grenier, “Adaptive filtering algorithms for stereophonic acoustic echo cancellation,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Detroit, MI, May 1995, vol. 5, pp. 3099–3102.

[8] T. Gänsler and J. Benesty, “Stereophonic acoustic echo cancellation and two-channel adaptive filtering: An overview,” Int. J. Adapt. Control Signal Process., vol. 14, no. 6, pp. 565–586, Aug. 2000.

[9] T. Hoya, Y. Loke, J. A. Chambers, and P. A. Naylor, “Application of the leaky extended LMS (XLMS) algorithm in stereophonic acoustic echo cancellation,” Signal Process., vol. 64, no. 1, pp. 87–91, Jan. 1998.

[10] J. Benesty, P. Duhamel, and Y. Grenier, “A multichannel affine projection algorithm with applications to multichannel acoustic echo cancellation,” IEEE Signal Process. Lett., vol. 3, no. 2, pp. 35–37, Feb. 1996.

[11] A. W. H. Khong and P. A. Naylor, “Stereophonic acoustic echo cancellation employing selective-tap adaptive algorithms,” IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 3, pp. 785–796, May 2006.

[12] A. W. H. Khong and P. A. Naylor, “Selective-tap adaptive algorithms in the solution of the nonuniqueness problem for stereophonic acoustic echo cancellation,” IEEE Signal Process. Lett., vol. 12, no. 4, pp. 269–272, Apr. 2005.

[13] A. W. H. Khong and P. A. Naylor, “Frequency domain adaptive algorithms for stereophonic acoustic echo cancellation employing tap selection,” in Proc. IEEE Int. Workshop Acoust. Echo Noise Control, Eindhoven, The Netherlands, Sep. 2005, pp. 141–144.

[14] S. Haykin, Adaptive Filter Theory. New Delhi, India: Pearson Education, 2003.

[15] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications. Chichester, U.K.: Wiley, 1998.

[16] F. Ling and J. G. Proakis, “A generalized multichannel least squares lattice algorithm based on sequential processing stages,” IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-32, no. 2, pp. 381–389, Apr. 1984.

[17] K. Mayyas, “Stereophonic acoustic echo cancellation using lattice orthogonalization,” IEEE Trans. Speech Audio Process., vol. 10, no. 7, pp. 517–525, Oct. 2002.

[18] B. Farhang-Boroujeny, “Fast LMS/Newton algorithms based on autoregressive modeling and their application to acoustic echo cancellation,” IEEE Trans. Signal Process., vol. 45, no. 8, pp. 1987–2000, Aug. 1997.

[19] N. Kalouptsidis and S. Theodoridis, Adaptive System Identification and Signal Processing Algorithms. London, U.K.: Prentice-Hall, 1993.

[20] V. J. Mathews and S. C. Douglas, Adaptive Filters, 2003 [Online]. Available: http://www.ece.utah.edu/mathews/ece6550/chapter3.pdf

[21] S. S. Narayan, A. M. Peterson, and M. J. Narasimha, “Transform domain LMS algorithm,” IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-31, no. 3, pp. 609–615, Jun. 1983.

[22] T. Gänsler and J. Benesty, “New insights into the stereophonic acoustic echo cancellation problem and an adaptive nonlinearity solution,” IEEE Trans. Speech Audio Process., vol. 10, no. 5, pp. 257–267, Jul. 2002.

Harsha I. K. Rao (S’06) received the B.E. degree in electronics and communication engineering with highest honors from the National Institute of Technology, Tiruchirappalli, India, in 2003, and the M.E. degree in electrical engineering from the University of Utah, Salt Lake City, in 2008.

He is currently pursuing the Ph.D. degree in electrical engineering. His dissertation has focused on the problems of acoustic crosstalk cancellation and stereophonic acoustic echo cancellation to achieve sound spatialization using a pair-wise loudspeaker paradigm. From 2003 to 2004, he was a design engineer at ABB in Bangalore, India. His research interests include adaptive filtering and its application in acoustic signal processing.

Behrouz Farhang-Boroujeny (M’84–SM’90) received the B.Sc. degree in electrical engineering from Teheran University, Iran, in 1976, the M.Eng. degree from the University of Wales Institute of Science and Technology, U.K., in 1977, and the Ph.D. degree from Imperial College, University of London, U.K., in 1981.

From 1981 to 1989, he was with the Isfahan University of Technology, Isfahan, Iran. From 1989 to 2000, he was with the National University of Singapore. Since August 2000, he has been with the University of Utah, where he is now a Professor and Associate Chair of the department. He is an expert in the general area of signal processing. His current scientific interests are adaptive filters, multicarrier communications, detection techniques for space-time coded systems, cognitive radio, and signal processing applications to optical devices. In the past, he has worked on and made significant contributions to the areas of adaptive filter theory, acoustic echo cancellation, magnetic/optical recording, and digital subscriber line technologies. He is the author of the books Adaptive Filters: Theory and Applications (Wiley, 1998) and Signal Processing Techniques for Software Radios (self-published at Lulu publishing house, 2009).

Dr. Farhang-Boroujeny received the UNESCO Regional Office of Science and Technology for South and Central Asia Young Scientists Award in 1987. He served as Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from July 2002 to July 2005. He is now serving as Associate Editor of the IEEE SIGNAL PROCESSING LETTERS. He has also been involved in various IEEE activities, including the chairmanship of the Signal Processing/Communications chapter of the IEEE Utah Section in 2004 and 2005.
