IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 4, MAY 2011 699

Low-Delay Signal Processing for Digital Hearing Aids

Ashutosh Pandey and V. John Mathews, Fellow, IEEE

Abstract—Digital signal processing in modern hearing aids is typically performed in a subband or transform domain that introduces analysis–synthesis delays in the forward path. Long forward-path delays are not desirable because the processed sound combines with the unprocessed sound that arrives at the cochlea through the vent and changes the sound quality. Nonetheless, subband domain processing is the most popular choice for digital hearing aids because of the associated computational simplicity. In this paper, we present an alternative digital hearing aid structure with low-delay characteristics. The central idea in the paper is a low-delay spectral gain shaping method (SGSM) that employs parallel parametric equalization (EQ) filters. The low-delay SGSM provides frequency-dependent amplification for hearing loss compensation with low forward path delays and performs dynamic signal processing such as noise suppression and dynamic range compression. Parameters of the parametric EQ filters and the associated gain values are selected using a least-squares approach to obtain the desired spectral response. The low-delay structure also employs an off-the-forward-path, frequency domain adaptive filter to perform acoustic feedback cancellation. Extensive MATLAB simulations and subjective evaluations of the results indicate that the method of this paper is competitive with a state-of-the-art digital hearing aid system, but exhibits much smaller forward-path delays.

Index Terms—Adaptive filters, hearing aids, low-delay.

I. INTRODUCTION

HEARING aids provide frequency-dependent amplification of incoming signals to compensate for hearing loss in hearing-impaired patients. The maximum amount of sound reinforcement in a hearing aid is limited by acoustical coupling between the speaker and the microphones in the hearing aid. An adaptive filter is often used to continuously estimate the acoustic feedback and cancel it in hearing aids. Fig. 1 shows the block diagram of a typical digital hearing aid with a single microphone, where thin and thick lines indicate scalar and vector quantities, respectively. It is common to implement the system in a transform or subband domain [1], where each band operates at a slower rate than the fullband rate.

Manuscript received January 05, 2010; revised May 17, 2010; accepted June 28, 2010. Date of publication July 19, 2010; date of current version February 14, 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. James Johnston.

A. Pandey was with the Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112 USA. He is now with ClearOne Communications, Inc., Salt Lake City, UT 84116 USA (e-mail: pandey@eng.utah.edu).

V. John Mathews is with the Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112 USA.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TASL.2010.2060193

Fig. 1. Block diagram showing the signal processing components of a typical digital hearing aid.

Throughout the paper, we assume discrete-time signal processing, with separate time indices denoting the subband-domain and fullband signals, respectively. The signal transformation results in spectrally distributed signals at a decimated rate. Digital hearing aids use discrete samples of the microphone signal and the speaker signal to perform the necessary signal processing for hearing-impaired listeners. Such processes include adaptive feedback cancellation, noise suppression, hearing loss compensation, and dynamic range compression (DRC) applied on the spectrally distributed signals. In general, a hearing aid performs compensation of hearing loss, noise suppression to reduce ambient noise in the incoming signal, and DRC to provide a comfortable listening experience in the forward path [2]. The gain function may vary with time and is designed to perform all three operations. We refer to DRC and noise suppression as dynamic signal processing in this paper. In Fig. 1, the delay block is used to adjust the bias in the adaptive filter estimate. Larger choices of this delay improve the adaptive filter's estimate of the acoustic feedback and may provide higher levels of added stable gain (ASG)1 [3]–[5]. The broadband variable gain function can be used to adjust the overall output sound level in changing acoustic environments and is often available to hearing aid users as a volume control on the face plate of the hearing aid.

Processing the signals in a transform or subband domain results in three advantages: easy adjustment of the gain value in each band, faster adaptive filter convergence, and overall computational savings [6]. However, transform domain signal processing introduces an undesirable broadband delay in the forward path [2], [7], [8]. This is in addition to processing delays such as the analog-to-digital converter (ADC) and digital-to-analog converter (DAC) delays that are unavoidable in any realization of the hearing aid, and the delay described earlier. Typically, the ADC/DAC adds 0.4–2 ms of delay in hearing aids [9]. In practice, these time delays cause coloration effects when a hearing-aid user is talking. The talker's own voice reaches the cochlea with minimal delay via bone conduction and through the hearing-aid vent, and interacts with the delayed and amplified sound produced by the hearing aid signal processing to produce the coloration effect.

1 ASG is the extra amplification possible in a hearing aid with feedback cancellation over the case with no feedback suppression.



Broadband delays as short as 4 to 8 ms have been reported to be detectable and to degrade the perceived output sound quality [10]–[12]. This paper presents a digital hearing aid design that retains the benefits of transform domain signal processing with a small forward path delay.

Our method accomplishes this objective by combining a low-delay spectral gain shaping method (SGSM) with a low-delay adaptive feedback cancellation system in the hearing aid. The low-delay SGSM uses a set of parallel second-order parametric equalization (EQ) infinite impulse response (IIR) filters [13], [14] to obtain the desired response. The desired response depends on the prescribed hearing aid gain to compensate for the hearing loss, the amount of noise suppression, and the dynamic range compression. The parametric EQ filters are known for their low group delays and are widely used in professional audio systems for audio equalization [15]–[17]. The coefficients of the parametric EQ filters are selected in this paper to match properties of the human auditory system that are relevant to hearing aid design [18].

The hearing aid of this paper employs an off-the-forward-path adaptive feedback cancellation system in the frequency domain, similar to the system proposed by Morgan et al. [19] for acoustic echo cancellation. The feedback canceller employs a two-stage process. First, the adaptive filter estimates the feedback path using a frequency-domain implementation of the filtered-X LMS algorithm [20]. Second, the estimated coefficients are used to cancel the feedback signal in the time domain. Separation of the estimation of the feedback path and the cancellation of the feedback allows the system to use delays in the adaptation process without inserting delays in the cancellation process.

The rest of the paper is organized as follows. Section II provides a description of a classical transform domain hearing aid system, against which the low-delay system of this paper is evaluated. The low-delay structure is described in Section III. In Section IV, the performance of the hearing aid system of this paper is evaluated and compared with the structure described in Section II using MATLAB simulations. This section also contains design discussions and the results of subjective evaluations of the two systems. We make the concluding remarks in Section V.

II. CLASSICAL TRANSFORM DOMAIN SIGNAL PROCESSING FOR HEARING AIDS

For performance comparisons, we will use a subband-based system in which the subbands are created with oversampled generalized discrete Fourier transform (GDFT) filter banks [7], [21]. Each subband component operates at a sampling rate that is lower than the full sampling rate of the system by the decimation factor. The GDFT filter banks are known for their computationally efficient implementation via the FFT [5], [22], [23]. The microphone and speaker subband-domain signals are indexed by the band and the subband time instant; other signals follow a similar notation in the subband domain in this paper.

TABLE I
UPDATE EQUATIONS FOR A SUBBAND-BASED ADAPTIVE FEEDBACK CANCELLATION SYSTEM

Fig. 2. Block diagram of the low delay structure.

Gain compensation for hearing loss, noise suppression, dynamic range compression, and adaptive feedback cancellation are all done in the subband domain. For simplicity of presentation, the DRC is implemented as an output limiter that clips the output if the output signal in a subband is larger than a predetermined threshold. Noise suppression is achieved by adjusting the gain function in the bands with a postfilter based on the Wiener filter weighting rule with a priori signal-to-noise ratio estimates [24]. The details of the postfilter are provided in the Appendix. Finally, adaptive feedback cancellation is done with a normalized least-mean-squares (NLMS) algorithm in each subband. The subband adaptive filters together model a fullband feedback path that is approximated with a linear impulse response of fixed length. Each subband adaptive filter contains the integer part of the proportionally reduced fullband filter length as its number of coefficients. If the fullband length is not an exact multiple of this reduction factor, the subband model is a close, but not exact, approximation of the fullband impulse response. The update equations for the NLMS adaptation in each subband to estimate the feedback path are given in Table I. In Table I, the step size is a small positive constant that controls the adaptation speed of the system, and the regularization term is another small positive constant designed to avoid a divide-by-zero [6].
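A generic complex NLMS recursion of the kind summarized in Table I can be sketched as follows; the function name, the vector orientation, and the exact normalization are illustrative assumptions rather than the paper's precise recursion.

```python
import numpy as np

def nlms_subband_update(w_m, u_m, d_m, mu, eps):
    """One NLMS update for the adaptive filter of a single subband.

    w_m : current complex coefficient vector of the subband adaptive filter
    u_m : most recent subband speaker (reference) samples, newest first
    d_m : current subband microphone sample
    mu  : small positive step size controlling the adaptation speed
    eps : small positive constant that avoids a divide-by-zero
    Returns the feedback-cancelled (error) sample and the updated weights.
    """
    y_m = np.vdot(w_m, u_m)                        # estimated feedback component
    e_m = d_m - y_m                                # a priori error
    norm = np.real(np.vdot(u_m, u_m)) + eps        # input energy + regularization
    w_m = w_m + (mu / norm) * u_m * np.conj(e_m)   # normalized LMS correction
    return e_m, w_m
```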

III. LOW-DELAY HEARING AID STRUCTURE

The block diagram of the low-delay digital hearing aid system is shown in Fig. 2, where vector quantities are drawn in thicker lines. In the low-delay structure, an optional delay is included in the forward path. The extra delay may help increase the added stable gain for a hearing-aid patient if needed.

PANDEY AND MATHEWS: LOW-DELAY SIGNAL PROCESSING FOR DIGITAL HEARING AIDS 701

The low-delay structure performs some of the signal processing in the time domain on a sample-by-sample basis and the rest in the frequency domain on a block-by-block basis. In particular, the spectral gain shaping and the estimation of the whitening filter coefficients are done on a sample-by-sample basis. The low-delay spectral gain shaping incorporates both dynamic signal processing and hearing loss compensation. The feedback cancellation is performed using an off-the-forward-path, filtered-X LMS adaptive filter [20] in the frequency domain. The notation used in this paper for the frequency-domain, block-based signal processing is as follows. Let each frame of the input signal be defined as

(1)

In (1), the frame size is given in number of samples, along with the shift between successive frames. Let the discrete Fourier transform (DFT) of the signal in a frame give its components at uniformly spaced frequency bins in radians/sample, and let the vector of all frequency components of a frame be denoted accordingly. With the above notation, the details of the low-delay hearing aid signal processing are presented in the following sections. Section III-A provides details of the low-delay spectral gain shaping method. The adaptive feedback cancellation associated with the low-delay structure is described in Section III-B.

A. Low-Delay Spectral Gain Shaping

Even though the estimated hearing loss profile measured at the time of hearing aid fitting does not change, at least until the patient is evaluated again, our system performs spectral gain shaping on a block-by-block basis. This is because, in addition to hearing loss compensation, the spectral gain shaping system also incorporates noise suppression and dynamic range compression. Let the magnitude of the desired spectral gain shaping function in any given block of the input signal be expressed in terms of the noise suppression response as

(2)

where the desired gain combines the noise suppression response, the dynamic range compression operation, and the desired hearing loss compensation. Given the desired gain at a set of design frequencies, the spectral gain shaping system consists of parallel equalization filters as shown in Fig. 3.

Fig. 3. Low-delay spectral gain shaping method with parallel equalization filters.

The coefficients are selected as a least-squares approximation of the desired overall spectral characteristics of the hearing aid. That is, the system selects the coefficients such that the cost function

(3)

is minimized, where the cost is the squared deviation, summed over the design frequencies, between the gain-weighted combination of the frequency responses of the EQ filters in the parallel structure and a target transfer function. The target is a minimum-phase system whose magnitude response is the same as the desired spectral gain shaping function; this particular transfer function can be obtained by spectral factorization. Our implementation utilizes a computationally efficient algorithm employing the discrete Hilbert transform, as described in [25]. It is known that of all causal and stable systems that have the same magnitude response, the minimum-phase system has the minimum group delay [26]. Therefore, matching to a minimum-phase transfer function reduces the overall group delay of the low-delay spectral gain shaping filter block shown in Fig. 3. In (3), the gain coefficients adjust the average frequency response to match the desired spectral response. The least-squares solution to the above problem is given by

(4)

where the solution vector is made up of the gain coefficients shown in Fig. 3. The vector and the matrix in (4) are shown in the equation at the bottom of the page.
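As a side note on the spectral factorization step, a minimum-phase frequency response with a prescribed magnitude can be obtained from the real-cepstrum form of the discrete-Hilbert-transform relation. The short numpy sketch below illustrates the idea under the assumption of a magnitude uniformly sampled over the full DFT circle; it is not the exact routine of [25].

```python
import numpy as np

def min_phase_from_magnitude(mag):
    """Minimum-phase frequency response whose magnitude matches `mag`.

    mag : desired magnitude sampled on n uniformly spaced DFT bins
          covering the full circle [0, 2*pi).
    Uses the fact that the log-magnitude and phase of a minimum-phase
    system form a Hilbert-transform pair, implemented here by folding
    the real cepstrum onto its causal part.
    """
    n = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-8))   # guard against log(0)
    cep = np.fft.ifft(log_mag).real           # real cepstrum of the magnitude
    fold = np.zeros(n)
    fold[0] = 1.0
    fold[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        fold[n // 2] = 1.0                    # Nyquist bin for even lengths
    return np.exp(np.fft.fft(fold * cep))     # complex minimum-phase response
```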



TABLE II
CALCULATION OF THE COEFFICIENTS OF THE EQ FILTERS

The matrix in (4) can be determined offline once the parametric filters and the frequencies are known. These parameters are typically specified at the time of hearing aid fitting.
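A minimal sketch of the least-squares gain selection in (3)–(4) is given below, assuming the gains are real, the EQ sections are fixed biquads, and the complex minimum-phase target has already been evaluated at the design frequencies (for example with the cepstral sketch above). The function name and the real/imaginary stacking are illustrative choices, not the paper's exact matrix layout.

```python
import numpy as np
from scipy.signal import freqz

def solve_eq_gains(biquads, target_resp, freqs_hz, fs):
    """Least-squares gains for the parallel EQ bank of Fig. 3.

    biquads     : list of (b, a) second-order sections, fixed offline
    target_resp : complex minimum-phase target at the design frequencies
    freqs_hz    : design frequencies in Hz (e.g., 63 points over 0-8 kHz)
    fs          : sampling frequency in Hz
    """
    w = 2.0 * np.pi * np.asarray(freqs_hz) / fs        # rad/sample
    # frequency response of every EQ section at the design frequencies
    H = np.column_stack([freqz(b, a, worN=w)[1] for b, a in biquads])
    # stack real and imaginary parts so real gains solve a real LS problem
    t = np.asarray(target_resp)
    A = np.vstack([H.real, H.imag])
    d = np.concatenate([t.real, t.imag])
    gains, *_ = np.linalg.lstsq(A, d, rcond=None)
    return gains
```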

While a variety of equalization filters can be used for spectral gain shaping, this work employs second-order peaking filters derived from analog peaking filters with the transfer function [27], [28]

(5)

where the quality factor and the filter gain at the center frequency (in rad/s) parameterize the response. This filter has gain close to 1 at most frequencies and shows a boost or attenuation at frequencies in the vicinity of its center frequency. The maximum gain, in the boost case, occurs at the center frequency and equals the filter gain parameter. Table II displays the computations involved in calculating the coefficients of the equivalent discrete-time filter when the center frequency is specified in the analog domain [13]. Here, the discrete-time transfer function has the form

(6)

where the sampling frequency enters through the bilinear transformation used for the conversion from the analog to the discrete-time filter.
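For illustration, one common digital realization of such a second-order peaking filter, obtained by a bilinear transform of the analog prototype (the familiar audio-EQ-cookbook form), is sketched below; the exact coefficient formulas in Table II may differ in detail, but the resulting boost or cut around the center frequency is of the same kind.

```python
import numpy as np

def peaking_biquad(fc_hz, q, gain_db, fs_hz):
    """Second-order digital peaking EQ section (cookbook-style biquad).

    fc_hz   : center frequency of the boost/cut in Hz
    q       : quality factor controlling the bandwidth of the peak
    gain_db : gain at the center frequency in dB (boost > 0, cut < 0)
    fs_hz   : sampling frequency in Hz
    Returns normalized (b, a) coefficients of H(z) = B(z)/A(z).
    """
    A = 10.0 ** (gain_db / 40.0)             # square root of the linear gain
    w0 = 2.0 * np.pi * fc_hz / fs_hz
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
    a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
    return b / a[0], a / a[0]

# e.g., a 12-dB boost around 1 kHz with Q = 4 at a 16-kHz sampling rate:
# b, a = peaking_biquad(1000.0, 4.0, 12.0, 16000.0)
```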

In this paper, we designed the EQ filters in the spectral gain shaping system to match the auditory filters in the ear. An auditory filter refers to a band of frequencies that the human auditory system treats similarly during sound processing [1], [18]. The frequency band associated with each auditory filter is referred to as the critical band of that filter. Psychoacoustic research has indicated that there are approximately 25 auditory filters in the audible frequency range. The center frequency of each equalization filter was selected to match the center of a critical band. Similarly, the quality factor of each filter was selected according to

(7)

where the quantities in (7) are the center frequency of the corresponding critical band and the bandwidth of that critical band, so that each quality factor is the ratio of the two. The value of the filter gain parameter is chosen heuristically to keep the overall forward-path delay small and to match the spectral response with the selected gain values reasonably accurately. In the experimental results presented later in the paper, the EQ filters and their center frequencies were selected to match the auditory filters of humans with normal hearing.

Fig. 4. Obtaining the whitening filter coefficients with a forward linear predictor.

The design can be extended to match the auditory filters of hearing-impaired patients in a similar manner; however, the auditory filter shapes must be estimated separately for each patient.
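As a small numerical illustration of (7), the quality factor of each section is the critical-band center frequency divided by the critical bandwidth; the center-frequency and bandwidth values below are rounded Bark-scale numbers quoted only for context, not the exact table used in the paper.

```python
# Quality factor = center frequency / critical bandwidth, as in (7).
# Approximate Bark-band values (illustrative only):
center_hz = [250.0, 1000.0, 3400.0]
bandwidth_hz = [100.0, 160.0, 550.0]
q_factors = [fc / bw for fc, bw in zip(center_hz, bandwidth_hz)]
print(q_factors)   # roughly [2.5, 6.25, 6.2]
```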

B. Adaptive Feedback Cancellation

In this section, we describe the adaptive feedback cancellation algorithm employed in the hearing aid for the low-delay structure. The adaptive feedback cancellation is performed in the frequency domain with the filtered signals shown in Fig. 2. The filtered signals are used in the adaptation process to reduce the bias in the estimate of the adaptive filter, as suggested by Hellgren [20]. The filtered signals are created by filtering the fullband signals with the whitening filter [29]. The whitening filter coefficients are estimated with a one-step forward linear predictor filter as shown in Fig. 4. In the z-domain, the whitening filter is directly related to the predictor filter [6]. The linear predictor is an FIR filter whose coefficients are estimated by minimizing the forward prediction error of the error signal.
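A common way to express that relation, assumed here since the exact expression is not reproduced above, is that the whitening filter is the prediction-error filter of the one-step forward predictor; a brief sketch:

```python
import numpy as np
from scipy.signal import lfilter

def whitening_from_predictor(pred_coeffs):
    """Prediction-error (whitening) filter from the forward predictor.

    pred_coeffs : [a_1, ..., a_K], the one-step forward predictor taps,
                  so that x_hat(n) = sum_i a_i * x(n - i).
    The whitening filter is then the FIR filter [1, -a_1, ..., -a_K],
    whose output is the forward prediction error.
    """
    return np.concatenate(([1.0], -np.asarray(pred_coeffs, dtype=float)))

# Usage: both the error (microphone-side) signal and the speaker signal
# are passed through the same whitening filter before adaptation, e.g.
# w = whitening_from_predictor(a_hat)
# e_f = lfilter(w, [1.0], e)   # whitened error signal
# u_f = lfilter(w, [1.0], u)   # whitened speaker signal
```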

The linear predictor filter attempts to estimate the time-varying autoregressive model for the input signal of a hearing aid. The input signals in hearing aids are audio signals such as speech or music that are highly nonstationary. Therefore, it is imperative that the linear predictor filter quickly adapts to the correct model as the input signal varies. In this paper, we use a recursive least-squares type algorithm, which is known for its faster convergence compared to gradient-based methods at the cost of additional computational complexity. However, in this case, the additional computational complexity of obtaining the predictor coefficients is small compared to the overall computational complexity of implementing the hearing aid system, because the order of the predictor filter is usually small in practice. Specifically, the linear predictor coefficients are obtained using the weighted recursive least-squares (WRLS) algorithm with a variable forgetting factor, as proposed by Milosavljevic et al. [30].

The variable forgetting factor (VFF) is used to improve the tracking capability of the WRLS algorithm, which is particularly helpful for nonstationary signals such as speech and music. The forgetting factor is adjusted depending on the signal stationarity, using the modified generalized likelihood ratio (MGLR) algorithm [30]. The MGLR algorithm provides information about the signal stationarity by computing a discrimination function. The discrimination function signifies the speed and the amount of change in the signal.

PANDEY AND MATHEWS: LOW-DELAY SIGNAL PROCESSING FOR DIGITAL HEARING AIDS 703

TABLE III
FINDING LINEAR PREDICTOR COEFFICIENTS USING WRLS WITH VFF

At each time instant, the discrimination function uses the last several samples of the signal, a fixed integer number of them, to indicate the signal stationarity as

(8)

where the logarithmic function appearing in (8) is defined over the analysis interval as

(9)

In (9), the intermediate variable is defined in Table III. If the value of the discrimination function calculated by (8) becomes smaller than a prescribed lower limit, it is set to that lower limit. Similarly, if it becomes larger than a prescribed upper limit, it is set to the upper limit. Subsequently, the forgetting factor is chosen to take its maximum value when the discrimination function is at the lower limit, its minimum value when the discrimination function is at the upper limit, and values obtained by linear interpolation between the two when the discrimination function lies between the limits. The complete algorithm for determining the linear predictor coefficients with the VFF-WRLS at each time instant is summarized in Table III, which also includes the matrix that is recursively updated. The whitened signals are used to estimate the feedback path with an adaptive filter in the frequency domain using a block-based least-mean-squares (FBLMS) algorithm [6]. The adaptive filter models the feedback path with an FIR filter containing a fixed number of coefficients. Furthermore, the signals are segmented into overlapping blocks for adaptive filtering. The FBLMS algorithm for each block is summarized in Table IV. In Table IV, a step-size parameter and the power estimates of the reference-signal samples govern the update of the adaptive filter in the frequency domain. Table IV also lists the sample-by-sample processing done in the forward path based on the adaptive filter update obtained from the block processing; both the time-domain adaptive filter coefficient vector and its frequency-domain representation are maintained for this purpose.

TABLE IV
DELAYLESS ADAPTIVE FEEDBACK CANCELLATION
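A minimal sketch of the kind of constrained, overlap-save frequency-domain block LMS update summarized in Table IV is given below; the 50% block overlap, the power-smoothing constant, and the function name are illustrative assumptions. The actual cancellation uses the inverse-transformed coefficients sample by sample in the time domain, which is what makes the scheme delayless.

```python
import numpy as np

def fblms_block_update(W, x_blk, e_blk, P, mu, eps=1e-8):
    """One constrained overlap-save FBLMS update for feedback-path estimation.

    W     : 2L-point DFT of the length-L adaptive filter (zero padded)
    x_blk : last 2L whitened speaker samples (previous block + new block)
    e_blk : the L new whitened error samples from the forward path
    P     : running per-bin power estimate of the reference signal
    mu    : step size
    Returns the updated (W, P) and the length-L time-domain coefficients
    h used by the forward path for sample-by-sample cancellation.
    """
    L = len(e_blk)
    X = np.fft.fft(x_blk)                               # reference spectrum
    P = 0.9 * P + 0.1 * np.abs(X) ** 2                  # power estimate (assumed smoothing)
    E = np.fft.fft(np.concatenate([np.zeros(L), e_blk]))
    grad = np.fft.ifft(mu * np.conj(X) * E / (P + eps)).real
    grad[L:] = 0.0                                      # gradient constraint: keep first L taps
    W = W + np.fft.fft(grad)
    h = np.fft.ifft(W).real[:L]                         # coefficients for time-domain cancellation
    return W, P, h
```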

IV. RESULTS AND DISCUSSION

This section presents results from MATLAB simulations of the hearing aid algorithms to demonstrate the performance of the low-delay approach and the subband-based algorithm described in Section II. Both methods were evaluated in terms of processing delays and output sound quality. The true feedback path was simulated using a 192-tap FIR filter in parallel with a homogeneous quadratic nonlinearity. The nonlinearity simulates the nonlinear distortions in the loudspeakers and A/D converters in a hearing aid system. The harmonic signal strength was 40 dB below that of the output of the linear component. Coefficients for the linear component of the feedback path were obtained from measurements of an inside-the-ear (ITE) hearing aid. The ITE hearing aid consisted of two FG-3653 omnidirectional microphones and a receiver. The output of the microphones and the input to the receiver were available at the face plate of the hearing aid as CS44 plugs. We used a standard EXPRESSfit hearing aid programming cable to drive and access the microphones and the speaker of the hearing aid. The programming cable was connected to an interface board through an 8-pin mini DIN plug that provided the required power to the programming cable and amplified the signals. The critical gain2 for the acoustic feedback model used in this paper was 36.4 dB. The feedback canceller employed a linear FIR system model with 128 coefficients. This undermodeling attempts to capture the practical situation in which it is very difficult to exactly model the feedback path in the system. The signal processing was done at the same fixed sampling rate in all experiments in this section.

2 Critical gain refers to the maximum amplification for which the output signal quality is acceptable without feedback cancellation.


The subband-based system performs the various signal processing operations in a hearing aid system with a fixed delay known as the analysis-synthesis filter bank (AS-FB) delay [22]. The AS-FB delay depends on the prototype filter length, the spectral resolution of the filter bank, and the decimation ratio. Generally, for the linear-phase prototype filters commonly used in hearing aids and employed in this paper, the AS-FB delay increases as the spectral resolution increases. In this paper, four subband designs S1, S2, S3, and S4 with spectral resolutions of 500, 250, 125, and 62.5 Hz were used to compare performance against the low-delay method. The prototype filter length, number of subbands, and decimation ratio were set individually for each of the subband designs S1, S2, S3, and S4. A relatively low down-sampling ratio was used in all designs to avoid aliasing and keep the analysis-synthesis delay as small as possible for the subband designs. The prototype filter coefficients were generated with the least-squares design given in [7]. The AS-FB delays for S1, S2, S3, and S4 were 2, 4, 8, and 16 ms, respectively.

Unlike the subband method, the delay in the low-delay system for a frame depends on the frequency-dependent group delay.3 In computing it, we assume without loss of generality that the phase of the low-delay system at zero frequency is 0. Moreover, for the low-delay method, the group delay in a frame depends on the magnitude of the spectral gain shaping function and is different for each frame. Since the maximum group delay in a frame is always larger than or equal to the delay at any individual frequency in that frame, we used the maximum over all frames in an experiment to characterize the delay of the low-delay method. In the simulations, we explored delay values that provided comparable or better performance than the four subband designs S1, S2, S3, and S4. Different components of the hearing aid system were evaluated separately prior to evaluation of the complete system.
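A short numpy/scipy sketch of how such a worst-case group delay could be evaluated for the parallel SGSM bank of a given frame is shown below; the numerical differentiation and the sum-of-sections model of the overall response are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import freqz

def max_group_delay_ms(biquads, gains, fs_hz, n=2048):
    """Maximum group delay (in ms) of the gain-weighted parallel EQ bank.

    The overall response is the sum of the per-section responses scaled
    by their gains; the group delay is the negative derivative of the
    unwrapped phase with respect to frequency (see footnote 3), computed
    here numerically on a dense frequency grid.
    """
    w = np.linspace(1e-3, np.pi, n)                # rad/sample (skip w = 0)
    H = sum(g * freqz(b, a, worN=w)[1] for g, (b, a) in zip(gains, biquads))
    phase = np.unwrap(np.angle(H))
    gd_samples = -np.gradient(phase, w)            # group delay in samples
    return 1000.0 * float(np.max(gd_samples)) / fs_hz
```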

For every experiment in this section, four hearing aid profiles—mild-gently sloping loss,4 moderate-flat loss, moderate-steeply sloping loss, and profound-gently sloping loss—as shown in Fig. 5(a)–(d) were used. The hearing loss thresholds across various frequency ranges are determined at the time of hearing aid fitting with the pure tone audiogram, commonly at the frequencies 125, 250, 500, 1000, 1500, 2000, 3000, 4000, 6000, and 8000 Hz [1]. A digital hearing aid attempts to provide the insertion gain for a hearing aid patient for a given hearing loss profile. The shape of the insertion gain does not necessarily follow the shape of the hearing loss profile and depends on the prescription method. The insertion gains for the hearing loss profiles used in this paper were obtained with the NAL-RP prescription [32] and are shown in Fig. 5(a)–(d).

3 The group delay τ(ω) of a system is related to its phase response φ(ω) as τ(ω) = −dφ(ω)/dω.

4 In the definition of a hearing loss profile, the first word suggests the degree of hearing loss and the second hyphenated word suggests the hearing loss shape across frequency [31].

Fig. 5. Hearing loss profile and insertion gain of (a) mild-gently sloping hearing loss, (b) moderate-flat hearing loss, (c) moderate-steeply sloping hearing loss, and (d) profound-gently sloping hearing loss, matched with the low-delay SGSM.

The perceptual quality of the output speech was measured in each simulation using the perceptual evaluation of speech quality (PESQ) measure [33]. PESQ is an objective measure that analyzes a test speech signal after temporal alignment with the corresponding reference signal, based on psychoacoustic principles. PESQ uses differences in the loudness spectra of the reference and test signals to calculate the perceptual quality of the test signal. PESQ provides a perceptual quality rating of a speech segment between 0.5 and 4.5 and can be interpreted as follows. The highest score indicates that the speech signal contains no audible distortions and is virtually identical to the clean speech segment. PESQ scores between 0.5 and 1 indicate that the distortions and residual noise in the speech signal are very high and the segment sounds unacceptably annoying. Ratings of 4, 3, and 2 can be interpreted as "good quality," "slightly annoying," and "annoying," respectively. Besides the PESQ measures, the output sound quality of the complete system for all methods was also evaluated subjectively by normal-hearing listeners.


Fig. 6. Group delay of the low-delay system for four different hearing aid gain profiles used in the simulations.

A. Low-Delay Spectral Gain Shaping Method

In the first experiment, we demonstrate the spectral gain shaping capability of the low-delay hearing aid system and study the effect of the frequency-dependent delay of the low-delay system. Specifically, we simulated the coloration effect in hearing aids due to the hearing aid signal processing and studied the effectiveness of the low-delay method in reducing the coloration effect with a psychophysical experiment. Additionally, we also evaluated the low-delay method in a binaural listening situation.

For all the experiments in this section, we used auditory parameters for normal-hearing listeners because these data are well known and documented. Auditory parameters for hearing-impaired listeners, on the other hand, must be measured for each patient individually. We employed 21 filters to model the auditory system based on the critical band models for a healthy human as described in [18]. The center frequencies and quality factors for the equalization filters in each parallel section of our system were selected as illustrated in Section III. The filter gain parameter was set to 20 dB for all filters. This choice provided a good magnitude match and small group delays for the hearing loss profiles used in the experiments.

Spectral gain shaping was accomplished using the least-squares method by minimizing the sum of errors over 63 equally spaced frequencies in the operating frequency range (0–8000 Hz). The resulting insertion gains are plotted against the desired gains in Fig. 5. It can be seen that the spectral gain shaping achieved by the method closely matches the desired frequency response for all four hearing aid gain profiles. The group delays associated with the low-delay SGSM in the forward path across frequencies are shown in Fig. 6. It can be seen that the maximum group delay among the four hearing aid gain profiles over all frequencies is less than 2.0 ms.

While the low-delay method produces substantially smaller delays across frequency compared to the broadband analysis-synthesis delay, the delay is variable across frequency for the low-delay method. It is not clear whether this will mitigate coloration effects in hearing aids. We performed a psychophysical experiment on normal-hearing (NH) listeners to evaluate the effect of the nonlinear phase in the low-delay method and the broadband delays in the subband methods in terms of the coloration effect. The coloration effect is observed in hearing aids when the processed sound through the hearing aid combines with the unprocessed sound at the cochlea.

Fig. 7. Block diagram to simulate coloration effects.

TABLE V
DESCRIPTION OF COLORATION RATINGS TO THE SUBJECT

Degradation in the sound quality due to the coloration effect depends on the amount of the unprocessed sound combining with the processed sound. This in turn depends on several factors such as the hearing aid gain profile, the type of hearing aid fitting, etc. For example, degradation in the audio quality due to the coloration effect can be observed for smaller delay values in hearing-impaired listeners with mild hearing loss than in hearing-impaired listeners with profound hearing loss [34], [35]. In order to obtain various degrees of the coloration effect, we simulated the coloration effect by adding two sounds—processed and unprocessed—where the attenuation in the unprocessed sound was controlled as shown in Fig. 7. The unprocessed signal was simply an attenuated original sound, where the attenuation modeled the loss in the original sound while traveling through the bones and the hearing aid earmold. In creating the processed sound, the hearing aid gain was provided with either the low-delay method or one of the subband methods. Amplification introduces nonlinear delays across frequency for the low-delay method and broadband delays for the subband methods. The magnitude response of the gain function in the hearing aid was compensated with a long linear-phase filter because the subjects for the experiment were normal-hearing listeners. We call this the inverse HA gain filter. The delay due to the inverse HA gain filter was compensated in the unprocessed sound so that only the delays due to the amplification methods under evaluation remained in the combined signal.

The combined sounds were created from six 5-s-long speech files at three attenuation levels of the unprocessed path, the smallest being 0 dB. The combined sounds were presented in a random order to six normal-hearing listeners through a pair of headphones in both ears in a quiet place. The subjects were asked to rate the combined sound on a scale of 1 to 5 for the coloration effect according to Table V. They were provided with example sounds for various types of degradation due to the coloration effect before starting the test. The subjects had access to the original sound at all times while rating the corresponding combined sound and were encouraged to listen to both before rating the combined sound.
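A minimal sketch of the combination in Fig. 7 is shown below; the generic filter used to represent the hearing-aid processing path and the omission of the inverse-HA-gain equalization step are simplifying assumptions of this illustration.

```python
import numpy as np
from scipy.signal import lfilter

def simulate_coloration(x, b_ha, a_ha, atten_db, extra_delay=0):
    """Combine processed and attenuated unprocessed sound (cf. Fig. 7).

    x           : clean input signal (1-D array)
    b_ha, a_ha  : filter carrying the gain/phase/delay of the hearing-aid
                  processing method under test
    atten_db    : attenuation of the direct (vent / bone-conduction) path
    extra_delay : optional extra broadband delay of the processed path, in samples
    """
    processed = lfilter(b_ha, a_ha, x)
    if extra_delay:
        processed = np.concatenate([np.zeros(extra_delay), processed])[:len(x)]
    direct = 10.0 ** (-atten_db / 20.0) * x        # attenuated unprocessed sound
    return direct + processed
```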


TABLE VI
RATINGS FOR COLORATION EFFECTS FOR VARIOUS METHODS AT DIFFERENT ATTENUATION LEVELS IN THE UNPROCESSED PATH

The coloration ratings with 95% confidence intervals for the low-delay (LD) method and the subband methods S1, S2, S3, and S4 at different attenuation levels are listed in Table VI. As expected, the coloration effect reduced as the attenuation increased for all systems. The coloration effect was the smallest for the subband design S1 among all the subband methods, indicating that the coloration effect increased with delay. This is in agreement with other findings [11], [12], [34], [35]. Furthermore, the coloration effect for the low-delay method was the least for all attenuation levels. This confirms that the low-delay method mitigates the problem of coloration effects in hearing aids.

Based on the above experiment, it can be said that the nonlinear phase of the low-delay method does not degrade the audio quality in monaural listening situations, or when the phase responses are the same in both ears. However, in practice, a hearing aid patient may wear a hearing aid in only one ear or can have binaural hearing aid fittings with different hearing aid gain profiles in each ear. In these situations, the hearing aid patient will experience different phase responses in the two ears if the low-delay method is employed. The human auditory system uses phase difference cues between the two ears, along with other cues, to localize sound in space [36]. Therefore, it is important to understand the effect of different phase responses in the two ears on the sound quality and localization abilities.

We performed a psychophysical experiment to assess the effects of different phase responses in the two ears due to the subband methods and the low-delay method. The assessment was done in terms of the output sound quality and sound source localization abilities in the binaural listening situation. For this experiment, stereophonic sounds were used that appeared to come from a certain direction in space. A directional stereophonic sound was created by filtering a monophonic sound through head-related transfer functions (HRTFs) for the left and right channels corresponding to a direction in space. The HRTFs were obtained from the MIT Media Labs HRTF set [37]. Five such stereophonic sounds were created that represented sounds coming from the directions (0, 0), (0, 45), (0, 90), (0, −45), and (0, −90).5

5 The first number is the elevation angle in degrees and the second number is the azimuth angle in degrees.

TABLE VII
USER RATINGS IN THE BINAURAL LISTENING SITUATION

Subsequently, phase responses corresponding to the different subband methods or the low-delay method for various hearing aid profiles were added to simulate the phase distortions due to these methods.
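A short sketch of how such stimuli could be generated is shown below, assuming head-related impulse responses (HRIRs) are available as arrays and the method-specific phase response is supplied as a function of frequency; applying the phase without altering the magnitude mimics the magnitude equalization described for the normal-hearing subjects.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_phase(sig, phase_fn, fs_hz):
    """Impose an extra phase response on `sig` without changing its magnitude.

    phase_fn : callable mapping frequency in Hz to phase in radians
               (e.g., the phase response of a hearing-aid method).
    """
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs_hz)
    return np.fft.irfft(spec * np.exp(1j * phase_fn(freqs)), len(sig))

def spatialize(mono, hrir_left, hrir_right, phase_left, phase_right, fs_hz):
    """Directional stereo stimulus from a mono signal and an HRIR pair,
    with per-ear phase distortions added for the method under test."""
    left = fftconvolve(mono, hrir_left)[:len(mono)]
    right = fftconvolve(mono, hrir_right)[:len(mono)]
    return (apply_phase(left, phase_left, fs_hz),
            apply_phase(right, phase_right, fs_hz))
```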

All subjects were first trained with the unprocessed stereophonic sounds (without an additional phase response, just with the HRTFs) to localize sounds until they became comfortable. Subsequently, stereophonic sounds with additional phase responses corresponding to the various methods were presented randomly to the subjects through a pair of headphones binaurally in a quiet place. Six subjects participated in the experiment. The subjects were asked to guess the location of the sound among the five locations after listening to the stereophonic sound. They were also asked to rate the sound quality according to the description in Table V. The subjects were asked to ignore the directionality when rating the sound quality.

Many combinations of the processing between the left and the right channels were tested. For each case, the mean values with 95% confidence intervals of the percentage of correct responses in the sound direction localization, the maximum error in the sound direction localization, and the output sound quality are listed in Table VII. In Table VII, LD profile 1 stands for the phase response corresponding to hearing aid gain profile 1 for the low-delay method. Similar names have been used for the other profiles as well. For the subband methods, only profile 1 was used because the phase distortion should be the same regardless of the hearing aid gain profile. It can be seen that the errors in localization for all hearing aid profiles in any combination for the low-delay method were comparable to the error in localization with the unprocessed sound. The audio quality ratings for the low-delay method suggest that the added phase response due to the low-delay spectral gain shaping did not produce any unpleasant artifacts and the overall audio quality in the binaural listening situation was good. For the subband methods, the error in localization was small when the same subband method was used in both ears. However, if a subband method was used in only one ear, the localization errors were very large.


TABLE VIII
MEASURES OF SPEECH ENHANCEMENT FROM NOISE SUPPRESSION

Even for the subband method S1 with the smallest delay, the localization error was as much as 180° in some cases. The audio quality was acceptable for the subband methods S1 and S2, but the audio quality degraded significantly for the subband methods S3 and S4, where subjects perceived an echo in the sound.

Comparing the subjective ratings and the localization errors of the low-delay method with those of the subband methods, which incur the broadband analysis-synthesis delay at all frequencies and are used in most hearing aids [5], [6], indicates that the method of this paper accomplishes the goal of reducing the forward path delay without sacrificing the ability to achieve desired gain profiles.

B. Noise Suppression

In this section, we study the noise suppression capability of the low-delay method and compare the results with those of the subband designs. In order to study the effect of noise suppression only, we disabled the adaptive feedback cancellation in this set of simulations. Three male and three female speech segments of length 20 s were used as input signals in this experiment. Car noise segments taken from the noisy speech corpus (NOIZEUS) [38] were added to the input signals at three different signal-to-noise ratios to create the noisy signals. Postfilter weights were calculated according to the method given in the Appendix, and the gain was applied with the subband method and with the low-delay spectral gain shaping method. PESQ measures were obtained for the noise-suppressed speech segments.

The PESQ values for the noise-suppressed speech segments for the low-delay method and all subband designs, for the four hearing aid gain profiles, are listed in Table VIII. Table VIII also lists the maximum group delay for the low-delay method and the AS-FB delay for the subband designs S1, S2, S3, and S4. The AS-FB delay depends only on the filter bank, while the group delays in the low-delay method depend on the hearing aid gain profile as well as on the postfilter weights for noise suppression. The results in Table VIII indicate that all methods improved the output sound quality. The PESQ values in Table VIII suggest that better noise suppression was achieved as the spectral resolution increased for the subband method. However, the improvements became less noticeable with the increase in spectral resolution. The perceptual quality ratings of the output sound for designs S3 and S4 were very similar. The output sound quality with the subband design S2 was slightly poorer than with the subband designs S3 and S4 but was acceptable in most cases. This difference was more prominent for lower input signal-to-noise ratios. The subband design S1 yielded the worst output sound quality among all subband designs. If we consider the AS-FB delay and the noise suppression capability for the various subband designs, the subband designs S2 and S3 are more practical choices than designs S1 and S4 for a hearing aid system. This is because the subband design S4 requires a long delay with little improvement over the subband design S3, whereas the noise suppression performance for the subband design S1 falls below acceptable levels in many of the cases evaluated here.

The PESQ ratings in Table VIII also indicate that the low-delay method produced perceptual output speech quality similar to that of the subband designs with high spectral resolution, such as S3 and S4. Nonetheless, the low-delay method required much smaller delays than the subband designs S3 and S4, as indicated by the maximum group delay values in Table VIII. It can be concluded from these results that even the maximum delay at any frequency for the low-delay method is smaller than the broadband delays introduced by the conventional subband-based methods, while the low-delay method exhibits noise suppression capabilities comparable to the subband systems with longer AS-FB delays.

C. Adaptive Feedback Cancellation

Adaptive feedback cancellation was implemented in MATLAB for the low-delay structure and the subband-based structure. Noise suppression was disabled for this experiment to assess only the feedback cancellation capability of both structures. The parameters of the frequency-domain adaptive feedback cancellation used in the low-delay structure, of the subband-based method, and of the forward linear predictor used to estimate the whitening filter were held fixed across these experiments. The broadband gain was set to unity.

The input signals to the hearing aid were six clean speech waveforms of length 80 seconds taken from the TIMIT database. Colored noise samples, with the power spectral density reducing at the rate of 3 dB per octave as the frequency increases, were added to simulate a noisy signal with a 40-dB signal-to-noise ratio. This noise model was chosen to represent the hardware noise due to circuits and sensors in the hearing aid. The initial value of the hearing aid gain was set to 20 dB below the target gain at all frequencies. Subsequently, the gain was slowly increased for 20 s at the rate of 1 dB/s to reach the target gain level. The adaptive feedback cancellation experiment ran for 80 s in each case. The last 10 s of the experiment were deemed the steady state. PESQ measures from the steady state were used to judge the adaptive feedback cancellation capability of all methods in this section.


Fig. 8. Misalignment (dB) as a function of forward path delays for the subband and low-delay methods.

Fig. 8 shows the PESQ values for all methods considered as a function of the processing delay in the system. The forward path delays were adjusted with the broadband delay parameters of the low-delay method and of the subband method, in addition to the minimum delay produced by these methods for spectral gain shaping. Fig. 8 shows that all subband designs yielded similar perceptual output sound quality for all profiles irrespective of the forward path delay values. An increase in the forward path delay did not increase the perceptual quality of the output sound. The output sound quality for profiles 1 to 3 was very good for all subband designs. On the other hand, the output sound quality for profile 4 was degraded for all subband designs. This is because the strength of the residual feedback components in the processed signal is larger for profile 4 than for the other profiles due to its higher gain requirements.

In the case of the low-delay method, the output sound quality was unacceptable if the broadband delay was set to 0, as shown by the low PESQ values for the first data point in the low-delay method graph of Fig. 8 for all profiles. The low-delay structure required at least 1 ms of broadband delay for profiles 1 to 3 and at least 2 ms of delay for profile 4 to perform satisfactory adaptive feedback cancellation with good output sound quality. With these broadband delay values, the total maximum processing delay for the low-delay method during these experiments was less than 3 ms. We used the minimum acceptable broadband delay values for the low-delay method obtained from the simulations in this subsection for evaluating the complete system in the next section. We used no additional broadband delay in the subband designs for the complete system evaluation. Without additional delay in the subband designs, the overall delay is equal to the AS-FB delay.

D. Evaluation of the Complete System

In earlier sections, we evaluated the individual components of the hearing aid system. This process also helped us identify parameter values that resulted in acceptable performance in practical situations for these components. In this section, we evaluate the complete system that includes amplification, adaptive feedback cancellation, and dynamic signal processing. The six 80-s-long input speech signals described in the previous section were used in this experiment. Additionally, car noise was added to the speech signals to create a signal-to-noise ratio of 10 dB. The output sound quality in the steady state was subjectively evaluated for all methods. The perceived output sound quality of the hearing aid system for both structures was assessed with normal-hearing listeners as well as with the PESQ measure. In order to work with normal-hearing listeners, the output of the low-delay system was further processed with long linear-phase filters that equalized for the effects of the insertion gain of the hearing aid.

To assess sound quality, six subjects participated in the listening test. The processed sounds were presented to them in both ears with a pair of headphones in a quiet place. Subjects rated each system under test on a scale from 1 to 5, where 5 corresponded to listening directly to clean speech, and 1 corresponded to a signal of such poor quality that it was deemed unacceptable. Specifically, a rating of 1 indicates continuous loud artifacts, a rating of 2 indicates continuous low-level artifacts, 3 indicates intermittent low-level noticeable artifacts, and ratings of 4–5 indicate very good to excellent speech quality. Artifacts were explained to the subjects as degradations in the speech signals. The output speech signals suffered from a variety of degradations such as the presence of residual background noise, ringing due to residual feedback, and waterfall sounds.6 Roughly speaking, ratings in the range 1–2 indicate unacceptable quality, and ratings of 3–4 indicate tolerable degradations. Subjects could switch between the input and the processed output for a direct comparison to "perfect" quality. Before the subjects started the experiment, they were provided with an example of the clean sound as well as example sounds of various types of artifacts for listening. The subjects always had access to the clean audio while rating the processed sounds.

The total maximum forward path delay at any time during an experiment was used to characterize the delay in the system. In the case of the low-delay method, the maximum forward path delay is the sum of the broadband delay and the maximum group delay required for dynamic signal processing. On the other hand, the maximum forward path delay for the subband method is the sum of the broadband delay and the AS-FB delay.

The mean values with 95% confidence intervals of the user ratings, PESQ values, and processing delays for all methods are listed in Table IX. The PESQ values reported here exhibit trends similar to the user ratings for all hearing aid profiles and methods. This validates the use of PESQ values for the earlier experiments and for the selection of the parameters of the hearing aid algorithms. The results in Table IX indicate that the subband designs S2, S3, and S4 yielded good output sound quality for profiles 1 to 3. The output sound quality with the subband designs S3 and S4 was rated better than with the subband design S2 for the above profiles.

6 An artifact due to short-term spectral changes in noise suppression systems. It is also sometimes referred to as swirling.


TABLE IX
OUTPUT SOUND QUALITY FOR THE COMBINED SYSTEM FOR BOTH METHODS

Output sound quality ratings for the subband design S1 indicated more artifacts than for any other method for profiles 1 to 3. This is mostly due to the lower spectral resolution of the subband design S1. The low-delay method also yielded good output sound quality that was comparable to the subband designs S2, S3, and S4. However, the low-delay method resulted in lower delay values compared to the subband designs S2, S3, and S4. The processing delay for the subband design S1 is comparable to that of the low-delay method. However, the subband design S1 produced considerably more artifacts in the output than the low-delay method. For profile 4, the output sound quality was more degraded compared to the other profiles for all methods. This is because profile 4 has a higher insertion gain, which leads to additional residual acoustic feedback components in the output sound. The hearing aid signal processing is limited in suppressing the acoustic feedback components to provide better output sound quality for the feedback path used in this paper. A hearing aid that has a higher critical gain than the feedback path used in this paper may provide better output sound quality for profile 4. Audiologists often attempt to alter the feedback path response to improve audio quality for patients who require high levels of amplification [31]. That discussion is beyond the scope of this paper. Overall, it can be said that the low-delay method performed as well as the subband designs with high spectral resolutions, with much lower processing delays.

V. CONCLUSION

In this paper, we presented a structure for signal processing in hearing aids that reduces broadband delays in the forward path. The key element of the low-delay structure is the spectral gain shaping method that utilizes parametric equalization filters. Results of the extensive performance evaluation presented in the paper reveal that the low-delay signal processing method performs as effectively as a state-of-the-art subband-based method with much smaller forward path delays. A comparison of our implementations of this system has indicated that the low-delay structure has comparable or lower computational complexity than our subband realizations. This work paves the way for an alternative solution to the widely used subband method for implementing hearing aid signal processing algorithms and reducing the unwanted effects of long and broadband forward path delays.

APPENDIX

In this appendix, we briefly describe a method to determine the postfilter weights [15] that reduce noise in a band of frequencies centered around $\omega_i$ at time $k$ for the audio signal being processed. These weights are obtained from the instantaneous spectral power $P(\omega_i, k)$. Subband components are used to obtain the instantaneous spectral power in each band for the subband method, whereas the off-the-forward-path discrete Fourier transform (DFT) is used to obtain the spectral power that generates the postfilter weights for the low-delay method. Given the instantaneous spectral power in the band, the postfilter weight $G(\omega_i, k)$ that represents the amount of suppression for that band is calculated as

$$G(\omega_i, k) = \frac{\hat{\xi}(\omega_i, k)}{1 + \hat{\xi}(\omega_i, k)} \qquad (10)$$

where $\hat{\xi}(\omega_i, k)$ denotes the estimate of the a priori signal-to-noise ratio and is calculated using a decision-directed approach as

$$\hat{\xi}(\omega_i, k) = \beta\,\frac{\hat{P}_s(\omega_i, k-1)}{\hat{P}_n(\omega_i, k-1)} + (1 - \beta)\max\{\gamma(\omega_i, k) - 1,\, 0\} \qquad (11)$$

where $0 \le \beta \le 1$, $\gamma(\omega_i, k)$ is the a posteriori estimate of the signal-to-noise ratio defined as

$$\gamma(\omega_i, k) = \frac{P(\omega_i, k)}{\hat{P}_n(\omega_i, k)} \qquad (12)$$

$\hat{P}_s(\omega_i, k-1)$ is an estimate of the power of the clean audio in the last time frame given by

$$\hat{P}_s(\omega_i, k-1) = G^2(\omega_i, k-1)\,P(\omega_i, k-1) \qquad (13)$$

and $\hat{P}_n(\omega_i, k)$ is an estimate of the background noise power estimated as

$$\hat{P}_n(\omega_i, k) = \alpha\,\hat{P}_n(\omega_i, k-1) + (1 - \alpha)\,P(\omega_i, k) \qquad (14)$$

where $\alpha$ is a constant such that $0 < \alpha < 1$.
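The recursion in (10)-(14) can be expressed compactly in code. The MATLAB sketch below is illustrative only: it assumes a Wiener-type gain and a recursive noise power estimate as written above, and the function name, matrix layout, and initialization from the first frame are choices made for the example rather than details taken from the paper.

function G = postfilter_weights(P, alpha, beta)
% POSTFILTER_WEIGHTS  Illustrative per-band noise-suppression postfilter.
%   P is a (numBands x numFrames) matrix of instantaneous spectral power,
%   alpha (0 < alpha < 1) smooths the background noise power estimate, and
%   beta (0 <= beta <= 1) weights the decision-directed a priori SNR.
%   G returns the per-band suppression weights, one column per frame.
%   This is a sketch of the recursion described above, not the exact code
%   used in the paper.
[numBands, numFrames] = size(P);
G  = ones(numBands, numFrames);
Pn = P(:, 1);                        % assumed initialization of the noise power estimate
Ps = P(:, 1);                        % clean-audio power estimate of the previous frame
for k = 2:numFrames
    Pn = alpha * Pn + (1 - alpha) * P(:, k);           % (14) background noise power
    gammaPost = P(:, k) ./ max(Pn, eps);               % (12) a posteriori SNR
    xi = beta * (Ps ./ max(Pn, eps)) ...
         + (1 - beta) * max(gammaPost - 1, 0);         % (11) decision-directed a priori SNR
    G(:, k) = xi ./ (1 + xi);                          % (10) suppression weight for the band
    Ps = (G(:, k).^2) .* P(:, k);                      % (13) clean-audio power for the next frame
end
end

For the low-delay method, the columns of P would be obtained from the off-the-forward-path DFT of the input, whereas the subband method would use the subband components directly.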

REFERENCES

[1] S. Wyrsch and A. Kaelin, "Subband signal processing for hearing aids," in Proc. IEEE Int. Symp. Circuits Syst., Orlando, FL, Jul. 1999, vol. 3, pp. 29–32.

[2] J. M. Kates and K. H. Arehart, "Multichannel dynamic-range compression using digital frequency warping," EURASIP J. Appl. Signal Process., vol. 18, no. 1, pp. 3003–3014, Jan. 2005.

[3] M. G. Siqueira and A. Alwan, "Steady-state analysis of continuous adaptation in acoustic feedback reduction systems for hearing-aids," IEEE Trans. Speech Audio Process., vol. 8, no. 4, pp. 443–453, Jul. 2000.

[4] H. F. Chi, S. X. Gao, S. D. Soli, and A. Alwan, "Band-limited feedback cancellation with a modified filtered-X LMS algorithm for hearing aids," Speech Commun., vol. 39, no. 1, pp. 147–161, Jan. 2003.

[5] R. Brennan and T. Schneider, "A flexible filterbank structure for extensive signal manipulations in digital hearing aids," in Proc. IEEE Int. Symp. Circuits Syst., Monterey, CA, Jun. 1998, vol. 6, pp. 569–572.

[6] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications. New York: Wiley, 1998.

[7] M. Harteneck, S. Weiss, and R. W. Stewart, "Design of near perfect reconstruction oversampled filter banks for subband adaptive filters," IEEE Trans. Circuits Syst., vol. 46, no. 8, pp. 1081–1086, Aug. 1999.

[8] M. B. Sachs, I. C. Bruce, R. L. Miller, and E. D. Young, "Biological basis of hearing-aid design," Ann. Biomed. Eng., vol. 30, no. 2, pp. 157–168, Feb. 2002.


[9] J. Ryan and S. Tewari, "A digital signal processor for musicians and audiophiles," Hear. Rev., vol. 16, no. 2, pp. 38–41, 2009.

[10] J. Agnew and J. M. Thornton, "Just noticeable and objectionable group delays in digital hearing aids," J. Amer. Acad. Audiol., vol. 11, no. 6, pp. 330–336, Jun. 2000.

[11] M. A. Stone and B. C. J. Moore, "Tolerable hearing aid delays. I: Estimation of limits imposed by the auditory path alone using simulated hearing losses," Ear Hear., vol. 20, no. 3, pp. 182–192, 1999.

[12] M. A. Stone and B. C. J. Moore, "Tolerable hearing aid delays. II: Estimation of limits imposed during speech production," J. Amer. Acad. Audiol., vol. 11, no. 6, pp. 325–338, Aug. 2002.

[13] S. A. White, "Design of a digital biquadratic peaking or notch filter for digital audio equalization," J. Audio Eng. Soc., vol. 34, no. 6, pp. 479–483, Jun. 1986.

[14] S. J. Orfanidis, "High-order digital parametric equalizer design," J. Audio Eng. Soc., vol. 53, no. 11, pp. 1026–1046, Nov. 2005.

[15] E. Hansler and G. Schmidt, Acoustic Echo and Noise Control: A Practical Approach, 1st ed. Hoboken, NJ: Wiley, 2004.

[16] G. Ramos, J. J. Lopez, and J. Lloret, "Direct method with random optimization for loudspeaker equalization using IIR parametric filters," in Proc. Int. Conf. Acoust. Signal Syst. Process., Montreal, QC, Canada, May 2004, vol. 5, pp. 97–100.

[17] R. P. de Wit, A. J. Kaizer, and F. J. Op de Beek, "Numerical optimization of the crossover filters in a multiway loudspeaker system," J. Audio Eng. Soc., vol. 34, no. 3, pp. 115–123, Mar. 1986.

[18] W. A. Yost and D. W. Nielsen, Fundamentals of Hearing, 2nd ed. Hillsdale, NJ: Holt, Rinehart and Winston, 1968.

[19] D. R. Morgan and J. C. Thi, "A delayless subband adaptive filter structure," IEEE Trans. Signal Process., vol. 43, no. 8, pp. 1819–1830, Aug. 1995.

[20] J. Hellgren, "Analysis of feedback cancellation in hearing aids with filtered-X LMS and the direct method of closed loop identification," IEEE Trans. Speech Audio Process., vol. 10, no. 2, pp. 119–131, Feb. 2002.

[21] S. Weiss, A. Stenger, R. W. Stewart, and R. Rabenstein, "Steady-state performance limitations of subband adaptive filters," IEEE Trans. Signal Process., vol. 49, no. 9, pp. 1982–1991, Sep. 2001.

[22] R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal Processing. Upper Saddle River, NJ: Prentice-Hall, 1996.

[23] R. Dong, D. Hermann, R. Brennan, and E. Chau, "Joint filterbank structures for integrating audio coding into hearing aid applications," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Las Vegas, NV, Apr. 2007, pp. 1533–1536.

[24] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-33, no. 2, pp. 443–445, Apr. 1985.

[25] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.

[26] N. Karaboga and B. Cetinkaya, "Design of minimum phase digital IIR filters by using genetic algorithm," in Proc. 6th Nordic Signal Process. Symp., Espoo, Finland, Jun. 9–11, 2004, pp. 29–32.

[27] P. A. Regalia and S. K. Mitra, "Tunable digital frequency response equalization filters," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-35, no. 1, pp. 118–120, Jan. 1987.

[28] A. Marques and D. Freitas, "Infinite impulse response (IIR) inverse filter design for the equalization of non-minimum phase loudspeaker systems," in Proc. IEEE Workshop Applicat. Signal Process. Audio Acoust., New Paltz, NY, Oct. 16–19, 2005, pp. 170–173.

[29] A. Spriet, I. Proudler, M. Moonen, and J. Wouters, "Adaptive feedback cancellation in hearing aids with linear prediction of the desired signal," IEEE Trans. Signal Process., vol. 53, no. 10, pp. 3749–3763, Oct. 2005.

[30] B. D. Kovacevic, M. M. Milosavljevic, and M. Dj. Veinovic, "Time-varying AR speech analysis using robust RLS algorithm with variable forgetting factor," in Proc. Int. Assoc. Pattern Recognition, Jerusalem, Israel, Oct. 1994, vol. 3, pp. 211–213.

[31] H. Dillon, Hearing Aids, 1st ed. New York: Thieme Medical, 2001.

[32] D. Byrne and W. Tonisson, "Selecting the gain of hearing aids for persons with sensori-neural hearing impairments," Scand. Audiol., vol. 5, no. 1, pp. 51–59, 1976.

[33] "Perceptual evaluation of speech quality (PESQ): An objective method for end-to-end speech quality assessment of narrow band telephone networks and speech coders," ITU-T Rec. P.862, 2001.

[34] M. A. Stone and B. C. J. Moore, "Tolerable hearing aid delays. III: Effects on speech production and perception of across-frequency variation in delay," Ear Hear., vol. 24, no. 2, pp. 175–183, 2003.

[35] M. A. Stone and B. C. J. Moore, "Tolerable hearing aid delays. IV: Estimation of limits imposed by the auditory path alone using simulated hearing losses," Ear Hear., vol. 20, no. 3, pp. 182–192, 1999.

[36] D. Byrne and W. Noble, "Optimizing sound localization with hearing aids," Trends Amplif., vol. 3, no. 2, pp. 49–73, 1998.

[37] B. Gardner and K. Martin, "HRTF measurements of a KEMAR dummy-head microphone," 1994 [Online]. Available: http://sound.media.mit.edu/KEMAR.html

[38] Y. Hu and P. Loizou, "Subjective evaluation and comparison of speech enhancement algorithms," Speech Commun., vol. 49, pp. 588–601, 2007.

Ashutosh Pandey received the B.S. degree in electronics and communications engineering from the Institute of Technology-Banaras Hindu University (IT-BHU), Varanasi, India, and the M.S. degree in electrical and computer engineering from the University of Utah, UT.

Currently, he works as a DSP Research Engineer for ClearOne Communications, Salt Lake City, UT. His research interests are in applications of signal processing algorithms for audio conferencing products and hearing aids.

V. John Mathews (S'82–M'84–SM'90–F'02) received the B.E. (Honors) degree in electronics and communication engineering from the University of Madras, Madras, India, in 1980 and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Iowa, Iowa City, in 1981 and 1984, respectively.

He is a Professor of Electrical and Computer Engineering at the University of Utah. At the University of Iowa, he was a Teaching/Research Fellow from 1980 to 1984 and a Visiting Assistant Professor in the Department of Electrical and Computer Engineering during the 1984–1985 academic year. He joined the University of Utah in 1985, where he is engaged in teaching signal processing classes and conducting research in signal processing algorithms. He served as the Chairman of the Electrical and Computer Engineering Department from 1999 to 2003. His current research interests are in nonlinear and adaptive signal processing and the application of signal processing techniques to audio and communication systems, biomedical engineering, and structural health management. He has also contributed in the areas of perceptually tuned image compression and spectrum estimation techniques. He is the author of the book Polynomial Signal Processing (Wiley, 2000), coauthored with Prof. G. L. Sicuranza, University of Trieste, Trieste, Italy. He has published more than 125 technical papers.

Dr. Mathews has served as a member of the Signal Processing Theory and Methods Technical Committee, the Education Committee, and the Conference Board of the IEEE Signal Processing Society. He was the Vice President (Finance) of the IEEE Signal Processing Society during 2003–2005, and is now serving a three-year term as the Vice President (Conferences) of the Society. He is a past Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING and the IEEE SIGNAL PROCESSING LETTERS, and has served on the editorial board of the IEEE Signal Processing Magazine. He currently serves on the editorial board of the IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING. He has served on the organization committees of several international technical conferences, including as General Chairman of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2001. He is a recipient of the 2008–2009 Distinguished Alumni Award from the National Institute of Technology, Tiruchirappalli, India.