
  • 286 IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 13, NO. 2, MARCH 2005

Convergence Analysis of a Complex LMS Algorithm With Tonal Reference Signals

    Mrityunjoy Chakraborty, Senior Member, IEEE, and Hideaki Sakai, Senior Member, IEEE

Abstract: Often one encounters the presence of tonal noise in many active noise control applications. Such noise, usually generated by periodic noise sources like rotating machines, is cancelled by synthesizing the so-called antinoise by a set of adaptive filters which are trained to model the noise generation mechanism. Performance of such noise cancellation schemes depends on, among other things, the convergence characteristics of the adaptive algorithm deployed. In this paper, we consider a multireference complex least mean square (LMS) algorithm that can be used to train a set of adaptive filters to counter an arbitrary number of periodic noise sources. A deterministic convergence analysis of the multireference algorithm is carried out and necessary as well as sufficient conditions for convergence are derived by exploiting the properties of the input correlation matrix and a related product matrix. It is also shown that under the convergence condition, the energy of each error sequence is independent of the tonal frequencies. An optimal step size for fastest convergence is then worked out by minimizing the error energy.

Index Terms: Active noise control (ANC), convergence analysis, least mean square (LMS) algorithm.

    I. INTRODUCTION

ACTIVE noise control (ANC) ([1], [2]) is an established procedure for cancellation of acoustic noise where first a model of the noise generation process is constructed, and then, using that model, the so-called antinoise is synthesized that destructively interferes with the primary noise field and thus minimizes its effect. Of the several applications of ANC studied so far, a case of particular interest is the cancellation of periodic noise containing multitonal components generated by rotating machines. Typical examples include propeller aircraft, motorboats, vessels, etc. that employ multiple engines, which may not remain synchronized perfectly and thus may have varying speeds of rotation. The references [3]–[5] provide an account of major recent developments in this area. The active noise controller for such noise sources is illustrated in Fig. 1. There are p independent noise sources representing rotating machines. Usually, the acoustic noise produced by each machine is narrowband, being dominated by a single tone frequency, and is modeled as being generated by passing a sinusoid through an LTI filter. A synchronization signal, usually provided by a

Manuscript received February 26, 2003; revised December 3, 2003. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Futoshi Asano.

M. Chakraborty is with the Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India (e-mail: [email protected]).

H. Sakai is with the Department of System Science, Graduate School of Informatics, Kyoto University, Kyoto, Japan 606-8501 (e-mail: [email protected]).

    Digital Object Identifier 10.1109/TSA.2004.840938

tachometer, provides information about the rotational speed of each machine and is used to generate sinusoidal reference signals. Each reference signal is filtered by an adaptive filter separately and the combined output of all the adaptive filters is used to drive a loudspeaker. The error microphone shown is used as an error sensing device to detect the difference between the primary noise and the secondary noise produced by the loudspeaker and to convert this differential noise into an electrical error signal which is used to adjust the adaptive filter weights. The weight adjustment is done iteratively so that in the steady state, the loudspeaker produces an output that has the same magnitude but opposite phase vis-a-vis the primary noise, thus resulting in ideal noise cancellation. Under such a situation, the adaptive filters are seen to model each noise source exactly.

The adaptive filter algorithm analyzed in this paper for the multitonal noise cancellation purpose explained above is the least mean square (LMS) algorithm [6], which is well known for its simplicity, robustness, and excellent numerical error characteristics. Recently, for the special case of two independent noise sources (i.e., p = 2), a twin-reference complex LMS algorithm has been analyzed for convergence in [3]. This analysis first considers the 2 × 2 deterministic input correlation matrix and, using its eigenvectors, constructs a product matrix that turns out to be time invariant. The eigenvalues of the product matrix are then evaluated explicitly as functions of the algorithm step size, and convergence conditions on the step size are established by restricting the magnitude of each eigenvalue to lie within one. Such an approach, however, cannot be generalized to the case of an arbitrary number of noise sources, since for higher orders, the eigenvalues become intractably complicated functions of the step size and the noise frequencies.

In this paper, we present an alternative approach to the convergence analysis of a multireference complex LMS algorithm that processes p reference signals, where p can be any positive integer. The analysis, like [3], is a deterministic one and, to the best of our knowledge, is the first attempt to determine the convergence condition for a general multireference active noise controller. For this, first the eigenstructure of the input correlation matrix is examined to determine its general form. Next, using this, it is shown that for the multireference case too, the product matrix is a time invariant one, even though the input correlation matrix in this case does not have a unique eigen decomposition, unlike the twin-reference problem. The properties of the product matrix are then explored further to show finally that the norm of the weight error vector approaches zero with time if and only if the step size is chosen from within a certain range. An optimal step size for fastest convergence is then worked out by minimizing the energy of the error sequence, which is seen to

1063-6676/$20.00 © 2005 IEEE

    Authorized licensed use limited to: INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR. Downloaded on June 24, 2009 at 04:34 from IEEE Xplore. Restrictions apply.


be independent of the tonal frequencies and dependent only on the step size and the number of tonal components p. The claims regarding the convergence condition and optimal step size are then verified by simulation studies.

The paper, like [3], however, assumes the secondary path between the loudspeaker and the error microphone to have unity gain and zero phase shift, an ideal assumption that may not always be met exactly in practice, especially when the separation between the loudspeaker and the error microphone is large enough to give rise to an acoustic path with nonnegligible propagation delay, and also when the frequency responses of the loudspeaker, the power amplifier and the anti-aliasing filter have a distorting influence on the tonal components. In such cases, modifications in the LMS algorithm are required to take into account the nonideal nature of the secondary path, resulting in the usage of the so-called filtered-x LMS algorithm ([1], [2], [8]) rather than the standard LMS algorithm. A deterministic convergence analysis for the filtered-x LMS algorithm is, however, much more complicated than the one presented here and has not been addressed in this paper.

The organization of this paper is as follows: Section II presents the multiple reference complex LMS algorithm and Section III provides its convergence analysis in detail. Simulation studies and concluding remarks on the salient aspects of our treatment and observations are given in Section IV. Throughout the paper, matrices and vectors are denoted, respectively, by boldfaced uppercase and lowercase letters, while characters which are not boldfaced denote scalars. Also, (.)^T, (.)^H, and (.)^* are used to indicate simple transposition, Hermitian transposition and complex conjugation, respectively, and ||.|| denotes the norm of the vector involved.

    II. MULTIPLE REFERENCE LMS ALGORITHM

The filtering processes involved in the multiple reference active noise canceller of Fig. 1 are shown in Fig. 2, where a set of p reference signals x_i(n) = e^{jω_i n}, i = 1, 2, ..., p, are filtered by p complex valued, single tap adaptive filters simultaneously. The reference signal x_i(n) is generated from the speed information for the ith rotating machine, usually provided by a tachometer. The frequency ω_i is given by ω_i = 2πf_i/f_s, i = 1, 2, ..., p, where f_s is the sampling frequency adopted and f_i is the analog frequency given in Hertz. The noise generated by, say, the ith rotating machine is modeled as being produced by a filter driven by the single tone x_i(n). In other words, the electrical equivalent of the overall noise generated is given by the sum d(n) of the individual filter outputs. The outputs from the adaptive filters are added to form the signal y(n). Then, assuming the overall transfer function between the loudspeaker and the error microphone, including power amplifiers, analog anti-aliasing filters and sample and hold, to be unity [3], y(n) is subtracted from d(n) to generate the error signal e(n) = d(n) - y(n), which provides the discrete time, electrical equivalent of the error microphone output. The error signal is used to adapt the filter weights by the so-called complex LMS algorithm [6]. An underlying assumption of the present treatment is that the frequencies are distinct, i.e., ω_i ≠ ω_k for i ≠ k. Otherwise, the

Fig. 1. Physical model of a multireference active noise controller for cancelling multitonal acoustic noise.

Fig. 2. Adaptive filtering involved in the multireference acoustic noise controller.

problem becomes degenerate since, in a situation where some of the frequencies are the same, fewer than p filters will be necessary to cancel the effect of the noise. Also, similar to [3], we assume without loss of generality that the phase of each reference signal is zero and its amplitude is normalized to unity.

Define the input vector x(n) and the filter weight vector w(n) at the nth index as x(n) = [x_1(n), x_2(n), ..., x_p(n)]^T and w(n) = [w_1(n), w_2(n), ..., w_p(n)]^T. We then have y(n) = w^H(n) x(n) and e(n) = d(n) - y(n). The complex LMS algorithm is formulated by first considering the instantaneous squared error function J(n) = |e(n)|^2, which, in the current context, is given by

J(n) = |d(n) - w^H(n) x(n)|^2.    (1)


Then, denoting by ∇(n) the complex gradient vector obtained by differentiating J(n) w.r.t. each tap weight, the weights are updated by following a steepest descent search procedure, as given by

w(n+1) = w(n) - (μ/2) ∇(n)    (2)

where μ is the so-called step size. The gradient ∇(n) is given by ∇(n) = -2 e^*(n) x(n). Substituting in (2), we obtain the multiple reference complex LMS algorithm as

w(n+1) = w(n) + μ e^*(n) x(n).    (3)

The step size μ is to be chosen appropriately for convergence of the above iterative procedure. In Section III, we show that in the context of the active noise controller with multiple, deterministic reference inputs as described above, a necessary and sufficient condition for the corresponding LMS algorithm to converge is given by |1 - μp| < 1, or, equivalently, 0 < μp < 2, implying that μ should be chosen to satisfy 0 < μ < 2/p.
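As a quick check of update (3) and the condition above, the following is a minimal NumPy sketch (not from the paper): the signal model follows the formulation of Section II, with unit-amplitude complex tones and a desired signal d(n) = w_o^H x(n); the tonal gains in w_opt are hypothetical values chosen for illustration.

```python
import numpy as np

def complex_lms_tonal(omegas, w_opt, mu, n_iter=2000):
    """Multireference complex LMS with unit-amplitude tonal references.

    x(n) has entries exp(j*omega_i*n), the desired signal is d(n) = w_opt^H x(n),
    and the weights follow w(n+1) = w(n) + mu * conj(e(n)) * x(n), so that the
    weight error obeys v(n+1) = (I - mu x(n) x^H(n)) v(n).
    """
    omegas = np.asarray(omegas, dtype=float)
    w = np.zeros(len(omegas), dtype=complex)   # zero initial weights
    err = []
    for n in range(n_iter):
        x = np.exp(1j * omegas * n)            # tonal reference vector x(n)
        e = np.vdot(w_opt, x) - np.vdot(w, x)  # e(n) = d(n) - y(n)
        w = w + mu * np.conj(e) * x            # complex LMS update, cf. (3)
        err.append(abs(e))
    return w, err

# p = 2 tones, so the condition 0 < mu < 2/p allows 0 < mu < 1.
omegas = [np.pi / 4, np.pi / 2]
w_opt = np.array([0.8 - 0.3j, 0.5 + 0.4j])     # hypothetical tonal gains
w, err = complex_lms_tonal(omegas, w_opt, mu=0.4)
```

With μ inside (0, 2/p) the error magnitude decays toward zero and w approaches w_opt; pushing μp to 2 or beyond makes the recursion fail to converge.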

III. CONVERGENCE ANALYSIS OF THE MULTIPLE REFERENCE LMS ALGORITHM

We define the optimal weight vector w_o as the weight vector for which the noise signal is given exactly as d(n) = w_o^H x(n). Ideally, we would want w(n) to converge to w_o in some appropriate sense. It is easy to see that if at any index n, we have w(n) = w_o, the algorithm converges completely, meaning e(m) = 0 and w(m) = w_o for all m ≥ n. Conventionally, the input to the LMS algorithm as well as the desired response are random processes and in such a case, convergence is reached in the mean, i.e., for an appropriate choice of the step size, we have E[w(n)] → w_o as n → ∞. This, in turn, requires the so-called independence assumption requiring statistical independence between w(n) and x(n). However, for the active noise controller being considered here, each input is a deterministic signal of a specific type, namely, a complex sinusoid of known frequency. Similarly, the desired response is a deterministic signal, being generated by a set of LTI filters acting on deterministic input. In such a case, it is possible to carry out a deterministic convergence analysis, without requiring the independence assumption, by showing that for an appropriate choice of μ, ||v(n)|| actually approaches zero as n → ∞, where v(n) is the weight error vector at the nth index and is given by v(n) = w(n) - w_o.

Substituting e(n) in (3) by d(n) - w^H(n) x(n), then replacing d(n) by w_o^H x(n) and noting that v(n) = w(n) - w_o, it is easy to obtain

v(n+1) = G(n) v(n)    (4)

where

G(n) = I - μ x(n) x^H(n).    (5)

Applying the recursion in (4) repetitively backward till n = 0 and denoting by v(0) the initial weight error vector, we have

v(n+1) = G(n) G(n-1) ... G(0) v(0).    (6)

Next, we consider the eigen decomposition of the Hermitian matrix G(n). Note that the eigen subspaces of G(n) and x(n) x^H(n) are the same. Further, the matrix x(n) x^H(n) is a rank one matrix with two distinct eigenvalues, one of multiplicity one given by x^H(n) x(n) = p and the other, of multiplicity p - 1, given by zero. The eigen subspace corresponding to the eigenvalue p is the range space of x(n) x^H(n) and is spanned by x(n), whereas the eigen subspace corresponding to the zero eigenvalue is the null space of x(n) x^H(n) and has dimension p - 1. Since the matrix x(n) x^H(n) is Hermitian, the null space and the range space are mutually orthogonal. Denoting by q_i(n), i = 1, 2, ..., p - 1, a set of orthonormal vectors spanning the null space of x(n) x^H(n) and by u(n) the unit norm vector x(n)/√p, we express G(n) as

G(n) = Q(n) D Q^H(n)    (7)

where Q(n) = [q_1(n), q_2(n), ..., q_{p-1}(n), u(n)] and D = diag(1, 1, ..., 1, 1 - μp).
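The rank-one eigenstructure just described is easy to confirm numerically; the frequencies and the step size below are arbitrary values chosen for illustration.

```python
import numpy as np

p = 4
omegas = np.array([0.3, 0.9, 1.7, 2.5])   # arbitrary distinct frequencies
n = 11                                    # any time index
x = np.exp(1j * omegas * n)               # unit-amplitude input vector x(n)

R = np.outer(x, x.conj())                 # x(n) x^H(n): rank one and Hermitian
eigvals = np.sort(np.linalg.eigvalsh(R))  # real eigenvalues, ascending order

G = np.eye(p) - 0.1 * R                   # G(n) for mu = 0.1
g_eigvals = np.sort(np.linalg.eigvalsh(G))
```

The eigenvalues of x(n) x^H(n) come out as p (once) and 0 (p - 1 times); correspondingly, G(n) has eigenvalues 1 - μp (once) and 1 (p - 1 times).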

Substituting (7) in the matrix product in (6), we can write

G(n) G(n-1) ... G(0) = Q(n) D [Q^H(n) Q(n-1)] D [Q^H(n-1) Q(n-2)] D ... [Q^H(1) Q(0)] D Q^H(0).    (8)

Next we show that though the matrix Q(n) is dependent on the time index n, the product matrix B = D Q^H(n+1) Q(n) is time invariant. For this, we first make the following observation.

Theorem 1: Any eigenvector of G(n) corresponding to eigenvalue 1 is of the form q(n) = [β_1 e^{jω_1 n}, β_2 e^{jω_2 n}, ..., β_p e^{jω_p n}]^T, with Σ_{i=1}^p β_i = 0.

Proof: The proof of the theorem follows easily by considering a set of p - 1 linearly independent vectors f_i(n), i = 1, 2, ..., p - 1, where f_i(n) = [e^{jω_1 n}, 0, ..., 0, -e^{jω_{i+1} n}, 0, ..., 0]^T, with -e^{jω_{i+1} n} constituting the (i+1)th element. Clearly, each f_i(n) is orthogonal to x(n) and thus the set f_i(n), i = 1, 2, ..., p - 1, forms a basis for the null space of x(n) x^H(n). Any vector belonging to the null space of x(n) x^H(n), or, equivalently, to the eigen subspace of G(n) corresponding to eigenvalue 1, is given by a linear combination of the vectors f_i(n), i = 1, 2, ..., p - 1, and, thus, is of the form stated above. Hence proved.

Using Theorem 1, we denote q_i(n), i = 1, 2, ..., p - 1, as q_i(n) = E(n) a_i, with a_i = q_i(0).


Then, the matrix Q(n) can be written as Q(n) = E(n) A, where

E(n) = diag(e^{jω_1 n}, e^{jω_2 n}, ..., e^{jω_p n})    (9)

and

A = [a_1, a_2, ..., a_{p-1}, (1/√p) 1]    (10)

with 1 denoting the p × 1 vector of ones.
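The factorization Q(n) = E(n) A underlies the time invariance established below: whatever (non-unique) null-space basis A contains, Q^H(n+1) G(n) Q(n) = A^H [E^H(n+1) G(n) E(n)] A, so it suffices that E^H(n+1) G(n) E(n) does not depend on n. This can be checked numerically; the frequencies and step size below are arbitrary illustrative values.

```python
import numpy as np

mu = 0.1
omegas = np.array([0.3, 0.9, 1.7])
p = len(omegas)

def E(n):
    """Diagonal modulation matrix diag(exp(j*omega_i*n)) of (9)."""
    return np.diag(np.exp(1j * omegas * n))

def G(n):
    """G(n) = I - mu x(n) x^H(n) of (5)."""
    x = np.exp(1j * omegas * n)
    return np.eye(p) - mu * np.outer(x, x.conj())

# E^H(n+1) G(n) E(n) evaluated at two different time indices is identical,
# because E^H(n+1) E(n) and E^H(n+1) x(n) are both independent of n.
M0 = E(1).conj().T @ G(0) @ E(0)
M9 = E(10).conj().T @ G(9) @ E(9)
```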

It is then possible to observe the following. The matrix E(n) is unitary, i.e., E(n) E^H(n) = E^H(n) E(n) = I for all n. Since both Q(n) and E(n) are unitary, the matrix A = E^H(n) Q(n) is also unitary, i.e., A A^H = A^H A = I. The matrix U = Q^H(n+1) Q(n) can be written as

U = A^H E^H(n+1) E(n) A = A^H E^H(1) A

where E(1) = diag(e^{jω_1}, e^{jω_2}, ..., e^{jω_p}). Clearly, U does not depend on the time index n, and neither does B = DU. The matrix U is unitary, i.e., U U^H = U^H U = I.

We now use the above results to prove convergence of the algorithm and to establish the convergence condition. For this, we first prove the following.

Theorem 2: For any nonzero z ∈ C^{p×1}, the last elements (i.e., the pth entries) of the following vectors:

Uz, UBz, ..., UB^{p-1} z

cannot be zero simultaneously.

Proof: We prove by contradiction. Assume that the pth entry of each of the vectors Uz, UBz, ..., UB^{p-1} z is zero. Now, for k = 1, 2, ..., p, we have B^k z = D U B^{k-1} z, where, since the pth entry of U B^{k-1} z is zero by assumption and D differs from the identity only in its (p, p)th entry, D U B^{k-1} z = U B^{k-1} z; hence, recursively, B^k z = U^k z. Also note that each entry of the last column of A is given by 1/√p, so that the pth entry of Uy = A^H E^H(1) A y equals (1/√p) 1^H E^H(1) A y for any y. Therefore, if the pth element of U B^{k-1} z = U^k z is zero, we have (1/√p) 1^H E^H(1) A U^{k-1} z = 0, or, equivalently, using A U^{k-1} = E^H(k-1) A,

1^H E^H(k) c = 0, where c = Az.

Since this is satisfied simultaneously for k = 1, 2, ..., p, we can further write

V c = 0    (11)

where 0 is the p × 1 vector of zeros and the kth row of V is 1^H E^H(k), k = 1, 2, ..., p.

Note that the matrix V can be written as V = W E^H(1), where W is the p × p matrix with (k, i)th entry e^{-jω_i (k-1)}. The matrix W is a full rank Vandermonde matrix [7] since, as per our initial assumption, the nodes e^{-jω_i} are distinct, with ω_i ≠ ω_k for i ≠ k. From this and also from the fact that both A and E(1) are unitary, it follows that the only solution for (11) is given by c = Az = 0, i.e., z = 0, which is a contradiction. Hence proved.
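The full-rank property of the Vandermonde matrix invoked above is easy to verify numerically; the frequencies below are arbitrary illustrative values.

```python
import numpy as np

omegas = np.array([np.pi / 4, np.pi / 2, 3 * np.pi / 4])  # distinct frequencies
p = len(omegas)

# Vandermonde matrix with nodes exp(-j*omega_i): full rank iff the nodes differ
z = np.exp(-1j * omegas)
W = np.vander(z, p, increasing=True)
rank_distinct = np.linalg.matrix_rank(W)

# A repeated frequency makes two nodes coincide, and the matrix loses rank
z_bad = np.exp(-1j * np.array([np.pi / 4, np.pi / 4, 3 * np.pi / 4]))
rank_repeated = np.linalg.matrix_rank(np.vander(z_bad, p, increasing=True))
```

This is exactly why the distinct-frequency assumption ω_i ≠ ω_k made in Section II is essential to the argument.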

Note: Though the above proof uses the fact that the entries in the last column of A are all equal to 1/√p, this is, however, not necessary, and the only condition required is that no element in the last column of A be zero. In fact, denoting the ith entry of the last column of A by α_i, (11) can be written in the general case as W Γ c = 0, where Γ is a diagonal matrix with ith diagonal entry given by α_i^* e^{-jω_i}, i = 1, 2, ..., p. Clearly, W Γ is full rank if α_i ≠ 0, i = 1, 2, ..., p.

We use the above theorem to prove the following important

result.

Theorem 3: Given |1 - μp| < 1 and any nonzero vector z ∈ C^{p×1}, ||B^k z|| < ||z|| for k ≥ p.

Proof: First, we consider the vector Bx = DUx. Denoting the pth entry of Ux by c, it is easily seen that if c ≠ 0, then ||DUx|| < ||Ux|| since |1 - μp| < 1. As U is unitary, meaning ||Ux|| = ||x||, this implies that ||Bx|| < ||x||. On the other hand, if c = 0, then Bx = DUx = Ux and thus ||Bx|| = ||x||. Conversely, it also follows that if ||Bx|| = ||x||, then c = 0. Combining the two cases, we observe ||Bx|| ≤ ||x||, which can be generalized to include the powers of B. Noting that B^k x = B(B^{k-1} x), we can write

||B^k x|| ≤ ||B^{k-1} x|| ≤ ... ≤ ||Bx|| ≤ ||x||.    (12)

We now prove the theorem by contradiction. Since, for k > p, ||B^k z|| ≤ ||B^p z||, it is sufficient to prove that ||B^p z|| < ||z||. Assume that there exists at least one nonzero vector z so that ||B^p z|| = ||z||. From (12), it then follows that

||B^p z|| = ||B^{p-1} z|| = ... = ||Bz|| = ||z||.    (13)

First, we consider ||Bz|| = ||z||. As shown above, this implies that the pth entry of Uz is zero and therefore, Bz = DUz = Uz. Next, consider ||B^2 z|| = ||Bz||, which can now be written as ||B(Uz)|| = ||Uz||. Using the previous logic, this implies that the pth entry of U(Uz) = U^2 z is zero and thus, B^2 z = U^2 z. Proceeding like this up to ||B^p z|| = ||B^{p-1} z||, we obtain that the pth entries of Uz, U^2 z = UBz, ..., U^p z = UB^{p-1} z are zero simultaneously, which contradicts Theorem 2 since z is nonzero. Hence proved.

Fig. 3. Learning curves for (a) μ = 0.02, (b) μ = 0.05, (c) μ = 0.1, and (d) μ = 0.25.

We now show that, given z to be any nonzero vector, ||B^n z|| → 0 as n → ∞ if |1 - μp| < 1. First, if for any finite integer k, B^k z is a zero vector under the condition |1 - μp| < 1, then for all m > k too, B^m z is a zero vector and the statement is proved trivially. Such a situation arises when D is not full rank, meaning μ = 1/p, and z is such a vector that for some integer k, U B^{k-1} z lies in the null space of D. To prove the convergence in the general case, we discount this possibility and assume that for all integers k ≥ 0, B^k z is nonzero, or, equivalently, we assume that either μ ≠ 1/p, or, if μ = 1/p, then U B^{k-1} z does not belong to the null space of D for any integer k. It then follows from Theorem 3 that ||B^{(m+1)p} z|| < ||B^{mp} z||, m = 0, 1, 2, .... Since the norm of a vector is nonnegative, this along with (12) implies that ||B^n z|| → 0 as n → ∞. Also, it is easy to verify that if |1 - μp| > 1 then, in general, ||Bz|| > ||z|| and ||B^n z|| keeps growing with n, meaning that in such a case, ||B^n z|| does not approach zero. [When |1 - μp| = 1, ||B^n z|| = ||z|| for all n.] In other words, the condition |1 - μp| < 1 is both necessary and sufficient for convergence of the sequence {||B^n z||} to zero.
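The two regimes can be illustrated numerically by iterating v(n+1) = (I - μ x(n) x^H(n)) v(n) for a step size inside and outside the admissible range; the frequencies and the initial weight error below are arbitrary illustrative choices.

```python
import numpy as np

def weight_error_norm(mu, omegas, n_iter=400):
    """Iterate v(n+1) = (I - mu x(n) x^H(n)) v(n) and return ||v(n_iter)||."""
    omegas = np.asarray(omegas, dtype=float)
    v = np.ones(len(omegas), dtype=complex)     # nonzero initial weight error
    for n in range(n_iter):
        x = np.exp(1j * omegas * n)
        v = v - mu * np.vdot(x, v) * x          # (I - mu x x^H) v
    return np.linalg.norm(v)

omegas = [np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # p = 3, so 0 < mu < 2/3
inside = weight_error_norm(0.3, omegas)         # |1 - mu*p| = 0.1 < 1
outside = weight_error_norm(0.7, omegas)        # |1 - mu*p| = 1.1 > 1
```

Inside the range the weight error norm decays to zero; outside it, the norm never decreases (here it stays at or above its initial value of sqrt(3)).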

Substituting (8) in (6) and using the definition of the matrix B, we then write

v(n+1) = Q(n) B^n D Q^H(0) v(0).    (14)

Since Q(n) is unitary, we have ||v(n+1)|| = ||B^n z||, where z = D Q^H(0) v(0). In the general case, z is a nonzero vector and therefore, from above, ||v(n)|| → 0 as n → ∞ if and only if |1 - μp| < 1, or, equivalently, 0 < μ < 2/p. [When z is a zero vector, ||v(n+1)|| is zero trivially, which occurs when D is rank deficient, meaning μ = 1/p, and Q^H(0) v(0) belongs to the null space of D.]

Finally, using the fact that ||v(n)|| → 0 as n → ∞ for 0 < μ < 2/p, we make an interesting observation on Σ_{k=0}^∞ |e(k)|^2, as given in the theorem below.

Theorem 4: Given 0 < μ < 2/p, Σ_{k=0}^∞ |e(k)|^2 = ||v(0)||^2 / (μ(2 - μp)). In other words, Σ_{k=0}^∞ |e(k)|^2 is independent of the frequencies ω_i, i = 1, 2, ..., p, and, for a given initial weight error v(0), depends only on μ and p.

Proof: First note that the error signal e(n) can be written as e(n) = d(n) - w^H(n) x(n) = -v^H(n) x(n), as d(n) = w_o^H x(n).

Next, from (4), we write ||v(n+1)||^2 = v^H(n) G^H(n) G(n) v(n). Substituting G(n) as given by (5) and using the above expression of e(n), we can write

||v(n+1)||^2 = ||v(n)||^2 - μ(2 - μp) |e(n)|^2.

Applying the recursion on ||v(n)||^2 repetitively backward till n = 0, we obtain

||v(n+1)||^2 = ||v(0)||^2 - μ(2 - μp) Σ_{k=0}^n |e(k)|^2.

By letting n approach infinity on both sides and recalling that for 0 < μ < 2/p, ||v(n)|| → 0 as n → ∞, it is then straightforward to write Σ_{k=0}^∞ |e(k)|^2 = ||v(0)||^2 / (μ(2 - μp)). Hence proved.
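Theorem 4 can be checked numerically: under the update sketched earlier, the accumulated error energy matches ||v(0)||^2/(μ(2 - μp)) and does not change when the (distinct) frequencies do. The tonal gains in w_opt are hypothetical illustrative values.

```python
import numpy as np

def error_energy(mu, omegas, w_opt, n_iter=3000):
    """Run the multireference complex LMS from w(0) = 0, summing |e(n)|^2."""
    omegas = np.asarray(omegas, dtype=float)
    w = np.zeros(len(omegas), dtype=complex)
    energy = 0.0
    for n in range(n_iter):
        x = np.exp(1j * omegas * n)
        e = np.vdot(w_opt, x) - np.vdot(w, x)   # e(n) = d(n) - y(n)
        w = w + mu * np.conj(e) * x
        energy += abs(e) ** 2
    return energy

p, mu = 2, 0.3
w_opt = np.array([0.8 - 0.3j, 0.5 + 0.4j])       # hypothetical tonal gains
# ||v(0)||^2 / (mu (2 - mu p)) with w(0) = 0, so v(0) = -w_opt
predicted = np.linalg.norm(w_opt) ** 2 / (mu * (2 - mu * p))
e1 = error_energy(mu, [np.pi / 4, np.pi / 2], w_opt)
e2 = error_energy(mu, [0.5, 2.0], w_opt)         # different distinct frequencies
```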


Fig. 4. Learning curves for (a) μ = 0.35, (b) μ = 0.42, (c) μ = 0.46, and (d) μ = 0.5.

The above theorem can be used to determine the optimal step size for fastest convergence, as discussed in the following section.

    IV. SIMULATION STUDIES, DISCUSSION, AND CONCLUSIONS

In this paper, we have considered an active noise controller for cancelling tonal acoustic noise and determined the convergence condition for the multireference complex LMS algorithm needed to train the controller, with no restriction on the number of reference signals. Interestingly, the convergence condition 0 < μ < 2/p derived for unit power complex sinusoid inputs is identical to the stochastic case where unit variance random signals are filtered by a set of p single tap LMS based adaptive filters to estimate a desired random process. However, the deterministic convergence analysis ensures that in the absence of modeling error and any observation noise, the magnitude of the error signal actually approaches zero, while, in the stochastic case, the filter coefficients converge to the optimal weights only in the mean and thus a residual mean square error (mse) is observed in the steady state. It is also interesting to observe that a necessary and sufficient condition for B^n z → 0 as n → ∞ for any z is that the spectral radius of B, denoted by ρ(B) and given by ρ(B) = |λ_max(B)| (λ_max(B): eigenvalue of B with highest magnitude), should satisfy ρ(B) < 1 [7]. Comparing the two convergence conditions, both being necessary as well as sufficient, we can then infer that |λ_i(B)| < 1, i = 1, 2, ..., p, iff |1 - μp| < 1, where λ_i(B) denotes the ith eigenvalue of B.

The reference signals for the LMS algorithm are assumed to be of the form of unit magnitude complex sinusoids, whereas in real life, a tonal signal is either a cosinusoidal or a sinusoidal function of the time index n. However, the tonal component generated by one rotating machine can be modeled as being generated by two systems, having inputs e^{jωn} and e^{-jωn} respectively at the frequency ω concerned, with appropriate complex conjugate gains. In our simulation studies, we considered two tonal noise frequencies of 125 Hz and 250 Hz and a sampling frequency of 1 kHz, which gives rise to ω_1 = π/4 and ω_2 = π/2. The discrete time versions of the tonal acoustic noise considered were of the form a_1 cos(nπ/4 + φ_1) and a_2 cos(nπ/2 + φ_2). As per the formulation adopted in this paper, we then have p = 4 with x(n) = [e^{jnπ/4}, e^{-jnπ/4}, e^{jnπ/2}, e^{-jnπ/2}]^T and the optimal weight vector w_o determined by the corresponding amplitudes and phases. The total tonal noise generated is given by w_o^H x(n) and zero mean, additive white Gaussian noise of small variance is added to it to constitute d(n). The multireference LMS algorithm is run for increasing values of μ in the range 0.02 ≤ μ ≤ 0.5 and |e(n)| is plotted vis-a-vis n to obtain the learning curves, as shown in Figs. 3 and 4. Additionally, we also plot |e(n)| in dB (i.e., 20 log_10 |e(n)|) against n


Fig. 5. Learning curves in dB for (a) μ = 0.05 and (b) μ = 0.1.

in Fig. 5 for μ = 0.05 and μ = 0.1 to show the presence of residual error in the steady state contributed by the additive white Gaussian noise.

An inspection of Figs. 3 and 4 reveals that there is a certain range of values of μ that gives rise to fastest convergence and, as μ deviates further and further from this range, the rate of convergence deteriorates. Thus, in Fig. 3, it is seen that the convergence speed improves as μ progressively takes the following values: 0.02, 0.05, 0.1 and 0.25. On the other hand, Fig. 4 shows that as μ increases further and takes the values 0.35, 0.42 and 0.46, the rate of convergence decreases, and for μ = 0.5, the algorithm does not converge. An explanation for this phenomenon may be provided by recalling, from Theorem 4, that the energy of the error sequence e(n), under the convergence condition, is given by ||v(0)||^2 / (μ(2 - μp)). It is easy to verify that this function of μ has a unique minimum at μ = 1/p. Since every converging |e(n)| is a decreasing function of n, the minimum energy error sequence, for all practical purposes, will have a speed of convergence higher than that of other error sequences, and thus μ = 1/p provides the optimal step size for fastest convergence. This explains how, in our simulation studies, fastest convergence takes place at μ = 0.25 (= 1/p with p = 4), as shown in Fig. 3(d), and how the convergence becomes slower as μ deviates from this optimal value.
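The optimal step size can be illustrated with a sketch of the simulation setup above: two real tones at π/4 and π/2 modeled by p = 4 complex references, so that μ_opt = 1/p = 0.25. The gains in w_opt are hypothetical, no measurement noise is added, and convergence speed is measured by the weight error norm rather than the plotted |e(n)|.

```python
import numpy as np

def iters_to_converge(mu, omegas, w_opt, tol=1e-8, max_iter=20000):
    """Return the first n at which ||w(n) - w_opt|| drops below tol."""
    omegas = np.asarray(omegas, dtype=float)
    w = np.zeros(len(omegas), dtype=complex)
    for n in range(max_iter):
        x = np.exp(1j * omegas * n)
        e = np.vdot(w_opt, x) - np.vdot(w, x)
        w = w + mu * np.conj(e) * x
        if np.linalg.norm(w - w_opt) < tol:
            return n
    return max_iter

# Two real tones -> p = 4 complex references at +/- pi/4 and +/- pi/2
omegas = [np.pi / 4, -np.pi / 4, np.pi / 2, -np.pi / 2]
w_opt = np.array([0.5, 0.5, 0.25, 0.25], dtype=complex)  # hypothetical gains
counts = {mu: iters_to_converge(mu, omegas, w_opt) for mu in (0.05, 0.25, 0.4)}
```

Consistent with the discussion above, the step size μ = 1/p = 0.25 needs no more iterations than step sizes below or above it within (0, 2/p) = (0, 0.5).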

REFERENCES

[1] S. M. Kuo and D. R. Morgan, Active Noise Control Systems. New York: Wiley, 1996.
[2] C. R. Fuller, S. J. Elliot, and P. A. Nelson, Active Control of Vibrations. New York: Academic, 1996.
[3] S. Johansson, S. Nordebo, and I. Claesson, "Convergence analysis of a twin-reference complex least-mean-squares algorithm," IEEE Trans. Speech Audio Process., vol. 10, no. 3, pp. 213-221, May 2002.
[4] S. Johansson, I. Claesson, S. Nordebo, and P. Sjösten, "Evaluation of multiple reference active noise control algorithms on Dornier 328 aircraft data," IEEE Trans. Speech Audio Process., vol. 7, no. 4, pp. 473-477, Jul. 1999.
[5] S. J. Elliot, P. A. Nelson, I. M. Stothers, and C. C. Boucher, "In-flight experiments on the active control of propeller-induced cabin noise," J. Sound Vibr., vol. 140, no. 2, pp. 219-238, 1990.
[6] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.
[7] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1985.
[8] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.

Mrityunjoy Chakraborty (M'94-SM'99) received the B.E. degree in electronics and telecommunication engineering from Jadavpur University, Calcutta, India, in 1983 and the M.Tech. and Ph.D. degrees in electrical engineering from the Indian Institute of Technology, Kanpur and Delhi, India, in 1985 and 1994, respectively.

He joined the Indian Institute of Technology, Kharagpur, as a Lecturer in 1994, and is now a Professor of electronics and electrical communications engineering. He visited the National University of Singapore from 1998 to 1999 and Kyoto University, Kyoto, Japan, in 2002 and 2003 under a Kyoto University foundation fellowship and a JSPS fellowship. His research interests include digital and adaptive signal processing, VLSI architectures for signal processing, and signal processing applications in wireless communications.

Hideaki Sakai (M'78-SM'02) received the B.E. and Dr. Eng. degrees in applied mathematics and physics from Kyoto University, Kyoto, Japan, in 1972 and 1981, respectively.

From 1975 to 1978, he was with Tokushima University, Tokushima, Japan. He is currently a Professor in the Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto, Japan. He spent six months from 1987 to 1988 at Stanford University, Stanford, CA, as a Visiting Scholar. His research interests are in the area of adaptive signal processing and time series analysis. He was an Associate Editor of the IEICE Trans. Fundamentals from 1996 to 2000 and is on the editorial board of the EURASIP Journal of Applied Signal Processing.

Dr. Sakai was an Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from 1999 to 2001.
