
Department of Veterans Affairs

Journal of Rehabilitation Research and Development, Vol. 30, No. 1, 1993, Pages 95–109

Digital signal processing (DSP) applications for multiband loudness correction digital hearing aids and cochlear implants

Norbert Dillier, PhD; Thomas Frölich, Dipl Ing ETH; Martin Kompis, Dipl Ing ETH; Hans Bögli, Dipl Ing ETH; Waikong K. Lai, PhD
Department of Otorhinolaryngology, University Hospital, CH-8091 Zurich, Switzerland

Abstract—Single-chip digital signal processors (DSPs) allow the flexible implementation of a large variety of speech analysis, synthesis, and processing algorithms for the hearing impaired. A series of experiments was carried out to optimize parameters of the adaptive beamformer noise reduction algorithm and to evaluate its performance in realistic environments with normal-hearing and hearing-impaired subjects. An experimental DSP system has been used to implement a multiband loudness correction (MLC) algorithm for a digital hearing aid. Speech tests in quiet and noise with 13 users of conventional hearing aids demonstrated significant improvements in discrimination scores with the MLC algorithm. Various speech coding strategies for cochlear implants were implemented in real time on a DSP laboratory speech processor. Improved speech discrimination performance was achieved with high-rate stimulation. Hybrid strategies incorporating speech feature detectors and complex decision algorithms are currently being investigated.

Key words: digital hearing aids, experimental digital signal processor, multiband loudness correction, noise reduction algorithm, speech coding strategies for cochlear implants.

INTRODUCTION

New generations of fast and flexible single-chip digital signal processors (DSPs) allow the implementation of sophisticated and complex algorithms in real time which required large and expensive hardware only a few years ago. Signal processing algorithms such as nonlinear multiband loudness correction, speech feature contrast enhancement, adaptive noise reduction, speech encoding for cochlear implants, and many more offer new opportunities for the hard-of-hearing and the profoundly deaf. A recent review report on the status of speech-perception aids for hearing-impaired people came to the final conclusion that major improvements are within our grasp and that the next decade may yield aids that are substantially more effective than those now available (1).

Address all correspondence and requests for reprints to: N. Dillier, Dept. of Otorhinolaryngology, University Hospital, CH-8091 Zurich, Switzerland.

This paper concentrates on three areas of DSP applications for the hearing impaired and investigates algorithms and procedures which have the potential of being effectively utilized and integrated into existing concepts of rehabilitation of auditory deficits in the near future. The first area concerns the problem of interfering noise and its reduction through an adaptive filter algorithm. The second area deals with the reduced dynamic range and distorted loudness perception of sensorineurally deaf persons and its compensation through a multiband loudness correction algorithm. The third area covers new processing strategies for profoundly deaf persons with cochlear implants. While the first DSP application does not rely on the individual hearing loss data of a subject, and thus could serve as a general preprocessing stage for any kind of auditory prosthesis, the second and third applications are intimately related to psychophysical measurements and individual fitting procedures. In the case of cochlear implant processing strategies, a number of technical and physiological constraints have to be considered.

Although these three areas could be considered as three different projects that would need to be treated separately in more detail, there are many common aspects of signal processing and experimental evaluation. The use of similar tools and methods has been beneficial and fruitful throughout the theoretical and practical development of the respective projects. The complexity and interdisciplinary nature of current auditory prostheses research requires collaboration and teamwork which, it is hoped, will lead to further practical solutions.

EVALUATION OF THE ADAPTIVE BEAMFORMER NOISE REDUCTION SCHEME FOR HEARING-IMPAIRED SUBJECTS

A major problem for hearing aid users is the reduced intelligibility of speech in noise. While the hearing aid is a great relief for conversations in quiet, its usefulness is often drastically reduced when background noise, especially from other speakers, is present. Several single-microphone noise reduction schemes have been proposed in the past which can improve sound quality but often failed to improve intelligibility when tested with normal-hearing or hearing-impaired subjects in real-life situations, such as with competing speakers as noise sources (2,3,4,5,6).

Multimicrophone noise reduction schemes, such as the adaptive beamformer (7,8,9), make use of directional information and are potentially very efficient in separating a desired target signal from intervening noise. The adaptive beamformer enhances acoustical signals emitted by sources from one direction (e.g., in front of a listener) while suppressing noise coming from other directions. Thus, the effect is similar to the use of directional microphones or microphone arrays. Because of the adaptive postprocessing of the microphone signals, the adaptive beamformer is able to combine the high directivity of microphone arrays with the convenience of using only two microphones placed in or just above the ears.

The aim of our investigation was to optimize the algorithm for realistic environments, implement a real-time version, and estimate its usefulness for future hearing aid applications.

Method

Parameter Optimization. Based on the work of Peterson, et al. (8), a Griffiths-Jim beamformer was chosen with two microphones, one placed at each ear. In the first stage, the sum and difference of the two microphone signals were calculated. The former contained mainly the desired signal, the latter mainly noise. A self-adaptive filter then tried to cancel out as much of the remaining noise in the desired signal path as possible.

While the system works quite well in anechoic rooms, its performance is not very satisfactory in reverberant conditions, because a part of the desired signal will also come from other than the front direction. In that case, the adaptive beamformer will mistake the desired signal for noise and try to cancel it as well. Therefore, it is necessary to detect the presence of a target signal and to stop the adaptation of the filter during these intervals (10,11). The method of target signal detection implemented in our system is based on the comparison of the variances of the sum (Sigma, Σ) and difference (Delta, Δ) signals, which are easily obtained by squaring and lowpass-filtering the front-end signals of the Griffiths-Jim beamformer. An optimal threshold value of 0.6 for the ratio Σ/(Σ + Δ) was determined empirically through variations of the noise signal, filter parameters, and room acoustics. Other procedures were considered as well but were either less efficient or computationally too expensive. We found that the performance of the adaptive beamformer can be notably increased by optimizing the adaptation inhibition.
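To make the detection step concrete, the following minimal sketch computes the smoothed Σ and Δ powers by squaring and lowpass-filtering the front-end signals and flags the samples during which adaptation should be inhibited. Only the 0.6 threshold is taken from the text; the smoothing constant and all names are illustrative assumptions.

```python
import numpy as np

def adaptation_inhibit_flags(front_sum, front_diff, alpha=0.99, threshold=0.6):
    """Sketch of the target-signal detector: compare short-time variances of
    the Griffiths-Jim sum (Sigma) and difference (Delta) signals, obtained by
    squaring and lowpass-filtering (first-order smoother; alpha is assumed)."""
    sigma = delta = 0.0
    inhibit = np.empty(len(front_sum), dtype=bool)
    for n, (s, d) in enumerate(zip(front_sum, front_diff)):
        sigma = alpha * sigma + (1.0 - alpha) * s * s   # variance of sum path
        delta = alpha * delta + (1.0 - alpha) * d * d   # variance of diff path
        # target present when Sigma/(Sigma + Delta) exceeds 0.6 -> freeze filter
        inhibit[n] = sigma / (sigma + delta + 1e-12) > threshold
    return inhibit
```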

Figure 1 shows measurements of the directivitycharacteristic with and without adaptation inhibitionin a moderately reverberant room (RT = 0 .4 sec).Because the microphones at the ears of a dummyhead were omnidirectional, the beam not onlypointed to the front (0°), but also to the back (180°)of the listener . It can be seen that the adaptationinhibition increased the difference between twosources located at an angle of 0° and 90° fromaround 6 dB to approximately 10 dB . The sharpprofile of the beam with the adaptation inhibitionpresent, when compared with the rather smeared

Page 3: Digital signal processing (DSP) applications for multiband ...

97

Section

Digital Techniques Applied to Related Areas : Miller et al.

Directivity pattern120°

90°

60°

150°

180°

-150°

Figure 1.Effect of speech detection and adaptation inhibition on thedirectivity pattern of the adaptive beamformer . Front direction(nose) : 0°, left ear : 90°.

pattern without adaptation inhibition, is a result ofthe target signal detection scheme which not onlyenhances the directivity of the adaptive beamformer,but is also responsible for the shape and openingangle of the beam.

The adaptive beamformer depends on a great variety of acoustic and design parameters. To analyze the sensitivity of these parameters, theoretical as well as experimental investigations were carried out based on adaptive filter and room acoustics theories. A computer program was written which prompts the user for room acoustic and design parameters, as well as the directivity and position of a target and a noise source. The program then predicts the improvement in signal-to-noise ratio (SNR) which can be reached with a perfectly adapted beamformer. The predictions were experimentally verified. It is beyond the scope of this paper to discuss the theoretical analysis in detail or to describe the influence of all parameters investigated.

Three parameters were found to influence the performance of the adaptive beamformer most: the reverberation time of the room, the length of the adaptive filter, and the amount of delay in the desired signal path of the beamformer (12).

Figure 2 shows measurement data from an experimental setup where white noise was generated by two loudspeakers located at a distance of 1 m from a dummy head. The desired signal source was placed at 0°, the noise source at 45° to the right of the head. Thus, the two microphone signals picked up SNRs as indicated in Figure 2 by squares (comparing the output of the beamformer with the signal of the microphone which is directly irradiated by the noise source) and triangles (comparing the output with the contralateral microphone signal).

The variation of reverberation time (Figure 2, left) was achieved by performing the measurements in a number of different rooms whose acoustic characteristics had been determined. It can be seen that a moderate amount of reverberation reduces the effectiveness of the beamformer to a few decibels.

The reverberation time of the rooms where a noise reduction would be most advantageous can usually not be influenced. The design parameters of the adaptive beamformer, however, can be optimized. The most important of them is the length of the adaptive filter. When varying this parameter, it can be seen (Figure 2, right) that very short filters are able to switch to the microphone signal with the better SNR, while filters of at least about 500 taps in length are required to reach an additional 4 dB.

A computationally inexpensive way to optimize the adaptive beamformer is the selection of an optimal delay in the desired signal path. We found that for longer filters, such as a 512-tap filter, the choice of a reasonable delay is important. It was also found that the optimum depends on the actual acoustical setup and will lie between 25 and 50 percent of the filter length.

Implementation. A real-time version of the adaptive beamformer was implemented on a PC-based, floating point digital signal processor (TMS320C30, Loughborough Sound Images Ltd.). For the update of the adaptive filter, a least mean squares (LMS) adaptation algorithm (13) was employed. After evaluating several different adaptation algorithms, we found that this widely used and well-understood algorithm is still the best choice for our hearing aid application. An adaptation inhibition, as discussed above, was implemented.
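The following sketch shows how the pieces described above could fit together: a Griffiths-Jim front end, a delayed desired-signal path, and an LMS-updated noise canceller that is frozen whenever the detector flags a target. Only the recommendations of roughly 500 taps and a delay of 25 to 50 percent of the filter length come from the text; the step size and all names are assumptions.

```python
import numpy as np

def griffiths_jim_lms(left, right, n_taps=512, delay=128, mu=1e-3, inhibit=None):
    """Minimal Griffiths-Jim beamformer sketch with an LMS noise canceller.
    left/right: the two microphone signals (equal-length 1-D arrays).
    n_taps, delay, mu: illustrative values (text: >= ~500 taps, delay 25-50%).
    inhibit: optional boolean array from the target detector; True freezes
    the filter update during target-signal intervals."""
    s = 0.5 * (left + right)            # sum path: mainly desired signal
    d = 0.5 * (left - right)            # difference path: mainly noise
    w = np.zeros(n_taps)                # adaptive filter weights
    x = np.zeros(n_taps)                # tap-delay line on the noise reference
    out = np.zeros(len(s))
    for n in range(len(s)):
        x = np.roll(x, 1)               # shift delay line, newest sample first
        x[0] = d[n]
        y = w @ x                       # current noise estimate
        primary = s[n - delay] if n >= delay else 0.0
        e = primary - y                 # output = delayed sum minus estimate
        out[n] = e
        if inhibit is None or not inhibit[n]:
            w += 2.0 * mu * e * x       # LMS update (13), frozen during target
    return out
```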

With this system, filters of up to 500 taps in length at a sampling rate of 10,000 samples per second can be realized. For our implementation, a high-performance DSP had to be used whose battery power requirements are still excessive for a commercial hearing aid. The system is, however, flexible enough to allow an evaluation of the main aspects of the algorithm with the potential of future miniaturization.

Figure 2. SNR improvement as a function of reverberation time (left) and filter length (right).

Evaluation Experiments. To ensure that the adaptive beamformer really is a useful algorithm for hearing aid users, intelligibility tests were carried out with normal-hearing and hearing-impaired subjects. Speech test items were presented by a computer and consisted of two-syllable words with either the medial vowels or consonants forming phonological minimal pairs. Four response alternatives were displayed on a touch-sensitive computer display from which the subjects had to select a response by pointing to it with a finger. No feedback and no repetitions of test items were provided. Fifty or one hundred words per condition were presented and the number of correctly identified items was corrected for the chance level. Phoneme confusion matrices could be generated for further analysis of transmitted information (14).
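The paper does not give the correction formula; a common guessing correction for an m-alternative forced-choice test, which is presumably what is meant here, maps the chance score to zero as follows.

```python
def chance_corrected(p_observed, n_alternatives=4):
    """Assumed standard guessing correction: maps the chance level (1/m) to 0
    and a perfect score to 1. The paper only states that scores were
    chance-corrected; this particular formula is an assumption."""
    chance = 1.0 / n_alternatives
    return max(0.0, (p_observed - chance) / (1.0 - chance))

# e.g., 62% raw on a 4-alternative test -> about 49.3% chance-corrected
print(round(100 * chance_corrected(0.62), 1))
```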

Most of the test material was recorded in a test room with an average reverberation time of 0.4 sec. Speech was presented via a frontal loudspeaker at 70 dB sound pressure level (SPL). Speech-spectrum-shaped noise, which was amplitude-modulated at a random rate of about 4 Hz, was presented 45° to the right at different SNRs. A dummy head with two microphones located in the left and right conchas was used to pick up sounds. The distance between the dummy head and either of the two loudspeakers was 1 m. In the experiments with normal-hearing subjects, the sounds were presented directly via earphones monaurally or binaurally (unprocessed conditions) or binaurally after modification by the adaptive beamformer (processed condition). In the experiments with the hearing aid users, the unprocessed or processed sounds were presented via a loudspeaker in a sound-treated room.

This setup was thought to represent some important aspects of everyday listening situations: 1) the size and reverberation time of the room are typical for offices and living rooms in our surroundings; 2) a fair amount of head shadow is provided by the dummy head; and 3) because of the integrated adaptation inhibition, the adaptation of the beamformer did not have to be performed before the experiments were started. To include even more realistic situations, such as moving and multiple sound sources, a part of the investigation was performed with stereophonic recordings of cafeteria noise.

Figure 3. Evaluation of the adaptive beamformer with normal-hearing subjects (n = 9).

Intelligibility tests were first performed with nine normal-hearing subjects. As can be seen in Figure 3, intelligibility with the adaptive beamformer is highest compared with either one of the microphone signals or even with the binaural listening situation. This is true for all SNRs tested and for consonant as well as vowel tests, the largest increase in intelligibility being achieved at low SNRs. Most of these differences were statistically significant.

The noise reduction scheme is primarily meant to be useful to hearing-impaired persons, and so it was also tested with six hearing-impaired volunteers, all of whom were regular users of conventional hearing aids (Figure 4a). As for the normal-hearing subjects, the adaptive beamformer increased intelligibility when compared with either one of the two unprocessed microphone signals.

The ability of the beamformer to cope with acoustically complex situations and several competing speakers as noise sources is documented by the increased intelligibility in the cafeteria setup (Figure 4b). Note, however, that the differences between the two monaural conditions (directly irradiated and contralateral ear) could no longer be seen, due to multiple sound sources from all directions. At 0 dB SNR, the beamformer did not provide any benefit for consonant recognition, which is probably due to erroneous behavior of the speech-detector algorithm in some conditions. There was, however, still an improvement for vowel recognition which was statistically significant.

Figure 4. Evaluation results of hearing-impaired subjects listening with their own hearing aid either to the unprocessed right or left ear signal or to the output of the adaptive beamformer. a) Speech-spectrum-shaped noise. b) Cafeteria noise.

Conclusions

From our investigation, we conclude that three conditions are important for the design of an adaptive beamformer noise reduction scheme for hearing aids: 1) an adaptation inhibition has to be provided; 2) filters of no less than approximately 500 taps should be used; and 3) the delay should be set to a reasonable value, preferably somewhere between 25 and 50 percent of the filter length. When these requirements are met, the adaptive beamformer is able to improve intelligibility for normal-hearing as well as for hearing-impaired subjects, even in rooms with a realistic amount of reverberation and in complex acoustic situations, such as a cafeteria.


DIGITAL MULTIBAND LOUDNESS CORRECTION HEARING AID

In this section, an implementation of a new algorithm for a digital hearing aid is described which attempts to correct the loudness perception of hearing-impaired persons. A variety of multiband signal processing algorithms have been proposed and tested in the past (15,16,17,18,19,20,21,22). Some have failed, others showed large improvements in speech recognition scores. The aim of our study was the real-time implementation of an algorithm which restores normal loudness perception, similar to the concept suggested by Villchur (23) and others.

A new aspect of our implementation concerns the close relation between the psychoacoustic measurements and the ongoing analysis of the incoming signal to determine the required gains in the different frequency regions.

Methods

The digital master hearing aid consists of a DSP board for a 386-PC (Burr Brown PCI 20202C-1 with TMS320C25) with analog-to-digital (A/D) and digital-to-analog (D/A) converters. Programmable filters and attenuators (a custom-made PC board) are used to prevent aliasing and to control the levels of the signals. In addition to the laboratory version, a wearable device was built which has been tested in preliminary field trials. The processing principle is similar to the frequency domain approach described by Levitt (24) and is illustrated in Figure 5.

Magnitude estimation procedures are used to determine loudness growth functions in eight frequency bands that are subsequently interpolated to obtain an estimate of the auditory field of a subject. Sinewave bursts of 8-10 different intensities within the hearing range are presented in random order. After each presentation, the subject judges the loudness of the sound on a continuous scale with annotated labels ranging from very soft to very loud. The responses are entered via a touch screen terminal. The whole procedure is repeated at eight different frequencies.

Figure 5 displays an example of such a magnitude estimation. It can be seen that the function of the hearing-impaired subject starts at a much higher intensity and is much steeper than the curve of the normal-hearing group. The difference between the two curves provides the intensity-dependent hearing loss function at this frequency.

The sound signal is sampled at a rate of 10 kHz, segmented into windowed blocks of 12.8 ms, and transformed via fast Fourier transform (FFT) into the frequency domain. The amplitude spectrum is modified according to preprogrammed gain tables. The modified spectrum is then transformed back into the time domain via an inverse FFT and reconstructed via an overlap-add procedure. The modification of the spectrum is done by multiplying each amplitude value with a gain factor from one of the gain tables. The selection of the tables is controlled by the amplitude values in the different frequency regions. Thus, the signal is nonlinearly amplified as a function of frequency and intensity.
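The following sketch illustrates this analysis-modification-synthesis loop. Block length and sampling rate are from the text; the hypothetical gain_for_bin lookup stands in for the preprogrammed, level-dependent gain tables, and the spectral smoothing described later in the text is omitted.

```python
import numpy as np

def mlc_block(x_block, window, gain_for_bin):
    """One step of the MLC scheme sketched above: window -> FFT -> per-bin
    gain -> inverse FFT. gain_for_bin(k, magnitude) is an assumed stand-in
    for the gain tables indexed by frequency region and level."""
    spectrum = np.fft.rfft(x_block * window)
    gains = np.array([gain_for_bin(k, abs(c)) for k, c in enumerate(spectrum)])
    return np.fft.irfft(spectrum * gains, n=len(x_block))

def mlc_process(x, gain_for_bin, n=128, hop=64):
    """Overlap-add over 50%-overlapping Hann windows; n = 128 samples is the
    12.8 ms block length at the 10 kHz sampling rate given in the text."""
    window = np.hanning(n)
    y = np.zeros(len(x))
    for start in range(0, len(x) - n + 1, hop):
        y[start:start + n] += mlc_block(x[start:start + n], window, gain_for_bin)
    return y
```

With gain_for_bin returning 1.0 everywhere, mlc_process approximately reproduces its input, which is a convenient sanity check for the reconstruction path.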

Figure 5. Processing steps for the digital multiband loudness correction algorithm.


Gain tables are generated by interpolation of the measured hearing loss functions over the whole frequency range. These functions describe how much gain is needed at every frequency to restore normal loudness perception. To prevent a flattening of the spectrum through independent modification of adjacent amplitudes, a smoothed spectrum is used to determine the gain factors. Thus, the fine structure of the spectrum is preserved.

However, if the gain factors were determined directly from the amplitude values, the reconstructed signal would generally become too loud, because the hearing loss functions were measured using narrow band signals whereas incoming sounds consist mainly of broadband signals. The procedure was therefore modified to account for the loudness perception of complex sounds as follows (see Figure 6 for an example of the processing steps): the frequency (f_sin) and amplitude (L_in) of a sinewave signal are estimated which would produce a perceived loudness equal to that of the incoming complex signal. In this simple loudness estimation model, the intensity of this equivalent sinewave signal is determined as the total energy of the complex signal and its frequency as the frequency of the spectral gravity of the amplitude spectrum.

The spectral gravity was chosen because it always lies near the maximal energy concentration, which is mainly responsible for loudness perception. The correct gain factors which will restore normal loudness perception (SL_N) are thus obtained by using the smoothed spectrum raised through the point of the equivalent sinewave signal.
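In code, the equivalent-sinewave estimate amounts to two sums over the short-time amplitude spectrum. The names f_sin and L_in follow the text; the dB referencing is an illustrative assumption.

```python
import numpy as np

def equivalent_sinewave(amplitude_spectrum, bin_hz):
    """Sketch of the simple loudness estimation model: the equivalent sinewave
    takes the total energy of the block as its intensity and the spectral
    centre of gravity of the amplitude spectrum as its frequency."""
    freqs = np.arange(len(amplitude_spectrum)) * bin_hz
    total_energy = float(np.sum(amplitude_spectrum ** 2))
    # spectral gravity: amplitude-weighted mean frequency
    f_sin = float(np.sum(freqs * amplitude_spectrum) /
                  (np.sum(amplitude_spectrum) + 1e-12))
    l_in = 10.0 * np.log10(total_energy + 1e-12)   # level of equivalent tone
    return f_sin, l_in
```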

Evaluation Results

The algorithm was tested with 13 hearing-impaired subjects in comparison with their own hearing aids under four different conditions, in order to find out whether the scaling and fitting procedures were useful approaches for hard-of-hearing subjects and whether, or by how much, speech intelligibility could be improved with the multiband loudness correction (MLC) algorithm in quiet and noisy environments. The subjects had only minimal experience with the laboratory hearing aid, whereas they had used their own aids for more than a year. The function and fitting of the hearing aids were checked prior to the speech tests. Even though some hearing aids were equipped with an automatic gain control, the threshold of compression was set higher than the levels used in the experiments. Subjects with moderate to severe flat sensorineural hearing loss were selected, which allowed simultaneous processing of the whole frequency range with sufficient numerical resolution. The same test procedure was used as in the previous section.

Figure 7 displays speech intelligibility scores for a consonant and a vowel multiple-choice rhyme test. At the higher intensity level (70 dB, open squares), which would correspond to an average speech communication level in quiet rooms, the scores with the subjects' own hearing aids were rather high, especially in the vowel test. The difference between the new algorithm and the subjects' own hearing aids was about 10 percent.

At reduced presentation levels (60 dB, open circles), which would be characteristic of a more difficult communication situation, with a speaker talking rather softly or from a more remote position, the scores with the conventional hearing aids became significantly lower for most subjects. The advantage of the loudness correction becomes more apparent. Almost all subjects achieved scores of about 80 to 90 percent correct item identification, which was about as high as in the 70 dB condition. Subjects with very poor results with their own hearing aids profited most from the digital hearing aid.

In noisy conditions, where hearing aid users experience most communication problems, at SNRs of 0 and -5 dB (filled triangles and diamonds, respectively), the results differed from subject to subject. Most of them achieved higher scores with the digital hearing aid. Two subjects scored slightly worse with the DSP algorithm. One subject could not discriminate the words under this condition with her own hearing aid; with the MLC algorithm she obtained a score close to 50 percent.

Conclusions

The automated audiometric procedures to assess the frequency- and intensity-specific amount of hearing loss proved to be adequate for the calculation of the processing parameters. The 13 subjects did not experience major problems in carrying out the measurements. All subjects who had not reached 100 percent intelligibility with their own hearing aids showed improved intelligibility of isolated words with the digital processing, especially at low levels. Some subjects also showed substantial discrimination improvements in background noise, while others did not. A correlation between the hearing loss functions and the speech intelligibility scores was not found. Other phenomena, such as reduced frequency and temporal resolution, might have to be considered in addition to loudness recruitment. It can be expected that fine tuning of the processing parameters via, for example, a modified simplex procedure (25), or prolonged experience with a wearable signal processing hearing aid might further improve the performance of the MLC algorithm.


Figure 6. Input time signals (top), input levels L_in (corresponding to the perceived loudness of a narrow band signal), spectral gravity f_sin, equivalent scaled loudness SL_N, and processed output signals (bottom). Speech tokens /A-MA/ (left) and /A-SA/ (right).


Figure 7. Comparison of speech intelligibility tests with the subjects' own conventional hearing aids and the digital MLC hearing aid for 13 subjects. Open symbols: tests in quiet; filled symbols: tests in speech-spectrum-shaped noise. a) Consonant test scores. b) Vowel test scores.


DIGITAL SIGNAL PROCESSING STRATEGIES FOR COCHLEAR IMPLANTS

Major research and development efforts to restore auditory sensations and speech recognition for profoundly deaf subjects have been devoted in recent years to signal processing strategies for cochlear implants. A number of technological and electrophysiological constraints imposed by the anatomical and physiological conditions of the human auditory system have to be considered. One basic working hypothesis for cochlear implants is the idea that the natural firing pattern of the auditory nerve should be approximated as closely as possible by electrical stimulation. The central processor (the human brain) would then be able to utilize natural ("prewired" as well as learned) analysis modes for auditory perception. An alternative hypothesis is the Morse code idea, which is based on the assumption that the central processor is flexible and able to interpret any transmitted stimulus sequence after proper training and habituation.

Both hypotheses have never really been tested, for practical reasons. On the one hand, it is not possible to reproduce the activity of 30,000 individual nerve fibers with current electrode technology. In fact, it is even questionable whether it is possible to reproduce the detailed activity of a single auditory nerve fiber via artificial stimulation. There are a number of fundamental physiological differences in the firing patterns of acoustically versus electrically excited neurons which are hard to overcome (26). Spread of excitation within the cochlea and current summation are other major problems of most electrode configurations. On the other hand, the coding and transmission of spoken language requires a much larger communication channel bandwidth and more sophisticated processing than a Morse code for written text. Practical experiences with cochlear implants in the past indicate that some natural relationships (such as growth of loudness and voice pitch variations) should be maintained in the encoding process. One might therefore conceive a third, more realistic, hypothesis as follows: signal processing for cochlear implants should carefully select a subset of the total information contained in the sound signal and transform these elements into those physical stimulation parameters which can generate distinctive perceptions for the listener.

Many researchers have designed and evaluated different systems varying the number of electrodes and the amount of specific speech feature extraction and mapping transformations used (27). Recently, Wilson, et al. (28) reported astonishing improvements in speech test performance when they provided their subjects with high-rate pulsatile stimulation patterns rather than analog broadband signals. They attributed this effect partly to the decreased current summation obtained by nonsimultaneous stimulation of different electrodes (which might otherwise have stimulated in some measure the same nerve fibers and thus interacted in a nonlinear fashion) and partly to a fundamentally different, and possibly more natural, firing pattern due to an extremely high stimulation rate. Skinner, et al. (29) also found significantly higher scores on word and sentence tests in quiet and noise with a new multipeak digital speech coding strategy as compared with the formerly used F0F1F2 strategy of the Nucleus WSP (wearable speech processor).

These results indicate the potential gains which may be obtained by optimizing signal processing schemes for existing implanted devices. The present study was conducted in order to explore new ideas and concepts of multichannel pulsatile speech encoding for users of the Clark/Nucleus cochlear prosthesis. Similar methods and tools can, however, be utilized to investigate alternative coding schemes for other implant systems.

Signal Processing Strategies

A cochlear implant digital speech processor (CIDSP) for the Nucleus 22-channel cochlear prosthesis was designed using a single-chip digital signal processor (TMS320C25, Texas Instruments) (30,31). For laboratory experiments, the CIDSP was incorporated in a general purpose computer which provided interactive parameter control, graphical display of input/output buffers, and offline speech file processing facilities. The experiments described in this paper were all conducted using the laboratory version of the CIDSP.

Speech signals were processed as follows: after analog low-pass filtering (5 kHz) and A/D conversion (10 kHz), preemphasis and Hanning windowing (12.8 ms, shifted by 6.4 ms or less per analysis frame) were applied and the power spectrum was calculated via FFT; specified speech features such as formants and voice pitch were extracted and transformed according to the selected encoding strategy; finally, the stimulus parameters (electrode position, stimulation mode, and pulse amplitude and duration) were generated and transmitted via inductive coupling to the implanted receiver. In addition to the generation of stimulus parameters for the cochlear implant, an acoustic signal based on a perceptive model of auditory nerve stimulation was output simultaneously.
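A single analysis frame of this front end might look as follows. Frame length, shift, and sampling rate are from the text; the preemphasis coefficient is an assumption.

```python
import numpy as np

def analysis_frame(x, start, n=128):
    """One CIDSP analysis frame sketch: first-order preemphasis (coefficient
    assumed), 12.8 ms Hanning window at 10 kHz (n = 128), power spectrum."""
    frame = x[start:start + n].astype(float)
    frame[1:] -= 0.95 * frame[:-1]           # preemphasis
    spectrum = np.fft.rfft(frame * np.hanning(n))
    return np.abs(spectrum) ** 2             # power spectrum for this frame

def frames(x, n=128, shift=64):
    """Advance the analysis by 6.4 ms (64 samples) per frame, as in the text."""
    for start in range(0, len(x) - n + 1, shift):
        yield analysis_frame(x, start, n)
```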

Two main processing strategies were implemented on this system. The first approach, the Pitch Excited Sampler (PES), is based on the maximum peak channel vocoder concept, whereby the time-averaged spectral energies of a number of frequency bands (approximately third-octave bands) are transformed into appropriate electrical stimulation parameters for up to 22 electrodes (Figure 8, top). The pulse rate at any given electrode is controlled by the voice pitch of the input speech signal. A pitch extractor algorithm calculates the autocorrelation function of a lowpass-filtered segment of the speech signal and searches for a peak within a specified time lag interval. A random pulse rate of about 150 to 250 Hz is used for unvoiced speech portions.
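A minimal autocorrelation pitch detector in this spirit is sketched below. The search range, voicing threshold, and decision rule are assumptions; the 150-250 Hz random rate for unvoiced segments is from the text.

```python
import numpy as np

def pes_pulse_rate(segment, fs=10_000, f_lo=80.0, f_hi=400.0,
                   voicing_threshold=0.3, rng=np.random.default_rng()):
    """Sketch of the PES pulse-rate decision: autocorrelation peak search
    within an assumed 80-400 Hz lag range on a (lowpass-filtered) segment;
    the segment should span at least fs/f_lo samples."""
    seg = segment - segment.mean()
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]  # lags 0..N-1
    lo, hi = int(fs / f_hi), int(fs / f_lo)
    lag = lo + int(np.argmax(ac[lo:hi]))
    if ac[lag] > voicing_threshold * ac[0]:   # strong peak -> voiced: use F0
        return fs / lag
    return rng.uniform(150.0, 250.0)          # unvoiced: random 150-250 Hz
```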

The second approach, the Continuous Interleaved Sampler (CIS), uses a stimulation pulse rate that is independent of the fundamental frequency of the input signal. The algorithm continuously scans all frequency bands and samples their energy levels (Figure 8, middle and bottom). Since only one electrode can be stimulated at any instant of time, the rate of stimulation is limited by the required stimulus pulse widths (determined individually for each subject) and the time needed to transmit additional stimulus parameters. As the information about the electrode number, stimulation mode, and pulse amplitude and width is encoded by high frequency bursts (2.5 MHz) of different durations, the total transmission time for a specific stimulus depends on all of these parameters. This transmission time can be minimized by choosing the shortest possible pulse width combined with the maximal amplitude.

Figure 8. Three CIDSP strategies: pitch excited sampler (PES), continuous interleaved sampler with narrow band analysis (CIS-NA), and continuous interleaved sampler with wide band analysis and fixed tonotopic mapping (CIS-WF).

In order to achieve the highest stimulation rates for those portions of the speech input signals which are assumed to be most important for intelligibility, several modifications of the basic CIS strategy were designed, of which only the two most promising (CIS-NA and CIS-WF) will be considered in the following. The analysis of the short-time spectra was performed either for a large number of narrow frequency bands (corresponding directly to the number of available electrodes) or for a small number (typically six) of wide frequency bands, analogous to the approach suggested by Wilson, et al. (28). The frequency bands were logarithmically spaced from 200 to 5,000 Hz in both cases. Spectral energy within any of these frequency bands was mapped to stimulus amplitude at a selected electrode as follows: all narrow band analysis channels whose values exceeded a noise cut level (NCL) were used for CIS-NA, whereas all wide band analysis channels, irrespective of NCL, were mapped to preselected fixed electrodes for CIS-WF. Both schemes are supposed to minimize electrode interactions by preserving maximal spatial distances between subsequently stimulated electrodes. The first scheme (CIS-NA) emphasizes spectral resolution while the second (CIS-WF) optimizes fine temporal resolution. In both the PES and the CIS strategies, a high-frequency preemphasis was applied whenever a spectral gravity measure exceeded a preset threshold.
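The two mappings can be summarized in a few lines. Electrode numbering and the NCL handling are simplified, and the fixed electrode positions for CIS-WF are illustrative, not taken from the paper.

```python
def cis_na_channels(band_levels, ncl):
    """CIS-NA sketch: one narrow band per electrode; only bands whose level
    exceeds the noise cut level (NCL) produce a stimulus this scan cycle."""
    return [(elec, lvl) for elec, lvl in enumerate(band_levels) if lvl > ncl]

def cis_wf_channels(band_levels, electrode_map=(2, 6, 10, 14, 18, 21)):
    """CIS-WF sketch: typically six wide bands mapped to preselected fixed
    electrodes regardless of NCL (electrode numbers assumed)."""
    return list(zip(electrode_map, band_levels))
```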

Subjects

Evaluation experiments have been conducted with five postlingually deaf adult (ages 26-50 years) cochlear implant users to date. All subjects were experienced users of their speech processors. The time since implantation ranged from 5 months (KW) to nearly 10 years (UT, single-channel extracochlear implantation in 1980, reimplanted after device failure in 1987), with good sentence identification (80-95 percent correct responses) and number recognition (40-95 percent correct responses) performance, minor open speech discrimination in monosyllabic word tests (5-20 percent correct responses, all tests presented via computer, hearing alone), and limited use of the telephone. One subject (UT) still used the old wearable speech processor (WSP), which extracts only the first and second formant and thus stimulates only two electrodes per pitch period. The other four subjects used the new Nucleus Miniature Speech Processor (MSP) with the so-called multipeak strategy, whereby, in addition to first and second formant information, two or three fixed electrodes may be stimulated to convey information contained in two or three higher frequency bands.

The same measurement procedure to determine thresholds of hearing (T-levels) and comfortable listening levels (C-levels) used for fitting the WSP or MSP was also used for the CIDSP strategies. Only minimal exposure to the new processing strategies was possible due to time restrictions. After about 5 to 10 minutes of listening to ongoing speech, 1 or 2 blocks of a 20-item 2-digit number test were carried out. There was no feedback given during the test trials. All test items were presented by a second computer which also recorded the responses of the subjects, entered via a touch-screen terminal (for multiple-choice tests) or keyboard (number tests and monosyllabic word tests). Speech signals were either presented via loudspeaker in a sound-treated room (when patients were tested with their WSPs) or processed by the CIDSP in real time and fed directly to the transmitting coil at the subject's head. Different speakers were used for the ongoing speech, the number tests, and the actual speech tests, respectively.

Results and Discussion

Results of 12-consonant (/aCa/) and 8-vowel (/dV/) identification tests are shown in Figure 9. The average scores for consonant tests (Figure 9a) with the subjects' own wearable speech processors were significantly lower than with the new CIDSP strategies. The pitch-synchronous coding (PES) resulted in worse performance compared with the coding without explicit pitch extraction (CIS-NA and CIS-WF). Vowel identification scores (Figure 9b), on the other hand, were not improved by modifications of the signal processing strategy.

Figure 9. Speech test results with five implantees, four processing conditions. a) Consonant test results. b) Vowel test results.

The results for subject HS indicated that PES may provide better vowel identification for some subjects, while CIS was better for consonant identification for all tested subjects. It is possible that CIS is able to present the time-varying spectral information associated with the consonants better than can PES. Results from a male-female speaker identification test indicated that no speaker distinctions could be made using CIS, unlike PES, which yielded very good speaker identification scores. This suggested that voice pitch information was well transmitted with PES but not at all with CIS.

Hybrids of the two strategies were therefore developed, with the aim of combining the respective abilities of PES and CIS in transmitting vowel and consonant information, as well as retaining the ability of PES to transmit voice pitch information. In one hybrid, the stimulation was switched between PES and CIS, depending on whether the input speech signal was voiced or not. In another hybrid, for voiced portions of the speech, the lowest frequency active electrode was stimulated using PES while the remaining active electrodes were, at the same time, stimulated using CIS. For unvoiced portions, all active electrodes used CIS stimulation. A third hybrid is a variation on the second, using PES on the two lowest frequency active electrodes instead of only one.
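Schematically, the latter two hybrids differ only in how many of the lowest active electrodes are handed to PES during voiced frames. The sketch below assumes the voiced/unvoiced decision comes from the pitch extractor described earlier; all names are illustrative.

```python
def hybrid_frame(active, voiced, n_pes=1):
    """Partition one frame's active channels between PES and CIS.
    active: list of (electrode, level) pairs sorted from low to high frequency.
    voiced: boolean from the pitch extractor.
    n_pes:  1 or 2 lowest electrodes driven pitch-synchronously when voiced.
    Returns (pes_channels, cis_channels)."""
    if not voiced:
        return [], active                  # unvoiced: everything runs as CIS
    return active[:n_pes], active[n_pes:]  # voiced: lowest channel(s) as PES
```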

Figure 10 shows the differences in electrode activation patterns between the Nucleus MSP (Figure 10a) and the PES/CIS hybrid strategy (Figure 10b). The graphs were generated from direct measurements of the encoded high frequency transmission signals emitted by the induction coil of the speech processor. Using the subject's programming map, which determines the relationships between stimulus amplitude and perceived loudness, relative stimulus levels (ranging from 0 to 100 percent) were obtained which are mapped into a color (or black and white) scale for the purpose of illustration. The axes of the graphs have been arranged similar to a spectrogram, with time on the abscissa and frequency (tonotopical order of active electrodes) on the ordinate. A number of striking differences between the two coding schemes became apparent from these displays. The first and second formant trajectories can be clearly seen in the MSP examples. Due to the smaller number of electrodes assigned to the first formant region in the case of the hybrid strategy, the variations in first formant frequency are less apparent and cover only three electrodes, compared with six electrodes with the MSP. The rate of stimulation, however, is virtually identical for the lowest trajectory in both cases. The two highest stimulus trajectories in the MSP are assigned to fixed electrode numbers (4 and 7), whereas in the hybrid strategy there is distinctly more variation visible in those areas than would be expected. The most striking difference between the MSP and the hybrid strategies, however, is the high-rate CIS portion, with rather large variations in stimulus levels across electrodes and over time, as opposed to the more uniform and sparse stimulus pattern of the MSP. It can be seen that a distinction between the voiced plosive phonemes /b/ and /d/ is virtually impossible for the MSP, whereas with the hybrid (or CIS) the second formant transition between about 460 and 490 msec becomes visible and might be utilized by the implantees.

Figure 10. "Electrodograms" of the phonological consonant minimal pair /leiber/ - /leider/. a) Processed by the Nucleus MSP. b) Processed by a new hybrid processing strategy where the lowest electrode is excited pitch-synchronously (PES mode) and the higher electrodes at maximal rate (CIS mode). The start of the voiced plosive sounds at about 450 ms is indicated by arrows.

Preliminary experiments with these hybrid coding strategies have indeed shown some improvement in consonant identification. The results suggested, however, that the presence of voicing within time-varying portions could interfere with the perception of the CIS-encoded information. Implantees were able to perform male-female speaker identification quite well with all hybrids, indicating that the voice pitch information encoded into the PES stimulation within the hybrids can be perceived and used. This implies that by activating one or two low frequency electrodes using PES, it is possible to incorporate voice pitch information into an essentially CIS-like coding strategy.

Conclusions

The above speech test results are still preliminary due to the small number of subjects and test conditions. It is, however, very promising that new signal processing strategies can improve speech discrimination considerably during acute laboratory experiments. Consonant identification apparently may be enhanced by more detailed temporal information and specific speech feature transformations. Whether these improvements will persist in the presence of interfering noise remains to be verified.

The experiments using hybrid coding strategies combining PES and CIS stimulation indicate that there is even more potential for improvement of the transmission of information regarding particular consonants. Further studies will have to be carried out to investigate in greater detail the perceptual interactions that arise between PES and CIS stimulation within the resultant stimuli. The results also show that voice pitch information, added to a CIS-like coding strategy by encoding it into one or two low frequency electrodes, can be perceived and used by a cochlear implant user. Further optimization of these processing strategies should preferably be based on more specific data about loudness growth functions for individual electrodes or on additional psychophysical measurements.

Although many aspects of speech encoding can be efficiently studied using a laboratory digital signal processor, it would be desirable to allow subjects more time for adjustment to a new coding strategy. Several days or weeks of habituation are sometimes required until a new mapping can be fully exploited. Thus, for scientific as well as practical purposes, the further miniaturization of wearable DSPs will be of great importance.

SUMMARY

The application of digital signal processors for the hearing impaired was demonstrated with three examples:

• Optimal parameters for a two-microphone adaptive beamformer noise reduction algorithm were determined by simulations and measurements for realistic environments. The experimental evaluation was carried out with normal-hearing and hearing-impaired subjects using a real-time implementation on a floating point DSP. Results indicated the potential benefit of this method even in moderately reverberant rooms, when a speech detection algorithm is included and an adequate filter length for the least mean squares (LMS) algorithm is used.

• A digital hearing aid with a multiband frequency domain modification of the short-time amplitude spectrum was fitted using loudness scaling measurements and evaluated with 13 hard-of-hearing subjects. The measurements, which used narrow band signals, were converted into gain factors via a simplified loudness estimation model based on the calculation of the spectral gravity point and the total energy within a short segment of the input signal. Significant improvements in speech recognition scores were found in quiet as well as in noisy conditions.

• A variety of high-rate stimulation algorithms were implemented on an experimental digital signal processor for the Nucleus multielectrode cochlear implant and compared with the performance of the patients' own wearable and miniature speech processors. It was found that consonant identification could be significantly improved with the new strategies, in contrast to vowel identification, which remained essentially unchanged. Degradation in perceived sound quality due to loss of voice pitch information may be compensated by hybrid strategies.

ACKNOWLEDGMENTS

This study was supported by the Swiss National Research Foundation (Grant Nos. 4018-10864 and 4018-10865), by Ascom-Research (Hombrechtikon), and by Cochlear AG (Basel). We are also very grateful to all the subjects who have participated in sometimes boring and tiring experiments.

REFERENCES

1. Working Group on Communication Aids for the Hearing-Impaired. Speech-perception aids for hearing-impaired people: current status and needed research. J Acoust Soc Am 1991;90:637-86.
2. Boll SF. Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans Acoust Speech Signal Process 1979;27:113-20.
3. Graupe D, Grosspietsch JK, Basseas SP. A single-microphone-based self-adaptive filter of noise from speech and its performance evaluation. J Rehabil Res Dev 1987;24(4):119-26.
4. Lim JS, Oppenheim AV. Enhancement and bandwidth compression of noisy speech. Proc IEEE 1979;67:1586-1604.
5. Van Tasell DJ, Larsen SY, Fabry DA. Effects of an adaptive filter hearing aid on speech recognition in noise by hearing-impaired subjects. Ear Hear 1988;9(1):15-21.
6. Vary P. Noise suppression by spectral magnitude estimation—mechanism and theoretical limits. Signal Process 1985;8:387-400.
7. Griffiths LJ. An adaptive lattice structure for noise-cancelling applications. IEEE ICASSP Proc 1978:87-90.
8. Peterson PM, Durlach NI, Rabinowitz WM, Zurek PM. Multimicrophone adaptive beamforming for interference reduction in hearing aids. J Rehabil Res Dev 1987;24(4):103-10.
9. Chabries DM, Christiansen RW, Brey RH, Robinette MS, Harris RW. Application of adaptive digital signal processing to speech enhancement for the hearing impaired. J Rehabil Res Dev 1987;24(4):65-74.
10. Van Compernolle D. Hearing aids using binaural processing principles. Acta Otolaryngol (Stockh) 1990;Suppl 469:76-84.
11. Greenberg JE, Zurek PM. Evaluation of an adaptive beamforming method for hearing aids. J Acoust Soc Am 1992;91(3):1662-76.
12. Kompis M, Dillier N. Noise reduction for hearing aids: evaluation of the adaptive beamformer approach. Proc IEEE EMBS 1991;13(4):1887-8.
13. Widrow B, McCool JM, Johnson CR. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proc IEEE 1976;64:1151-62.
14. Miller GA, Nicely PE. An analysis of perceptual confusions among some English consonants. J Acoust Soc Am 1955;27:338-52.
15. Barfod J. Multichannel compression hearing aids: experiments and consideration on clinical applicability. In: Ludvigsen C, Barfod J, editors. Sensorineural hearing impairment and hearing aids. Scand Audiol 1978;Suppl 6:316-40.
16. Levitt H. Digital hearing aids: a tutorial review. J Rehabil Res Dev 1987;24(4):7-20.
17. Bustamante DK, Braida LD. Principal-component amplitude compression for the hearing impaired. J Acoust Soc Am 1987;82:1227-42.
18. Dillier N. Programmable master hearing aid with adaptive noise reduction using a TMS320-20. IEEE ICASSP Proc 1988:2508-11.
19. Levitt H, Neuman AC. Evaluation of orthogonal polynomial compression. J Acoust Soc Am 1991;90(1):241-52.
20. Lippmann RP, Braida LD, Durlach NI. Study of multichannel amplitude compression and linear amplification for persons with sensorineural hearing loss. J Acoust Soc Am 1981;69:524-34.
21. Staab WJ. Review article: digital/programmable hearing aids—an eye towards the future. Brit J Audiol 1990;24:243-56.
22. van Dijkhuizen JN, Festen JM, Plomp R. The effect of frequency-selective attenuation on the speech-reception threshold of sentences in conditions of low-frequency noise. J Acoust Soc Am 1991;90(2):885-94.
23. Villchur E. The rationale for multi-channel amplitude compression combined with post-compression equalization. Scand Audiol 1978;Suppl 6:163-77.
24. Levitt H, Neuman A, Mills R, Schwander T. A digital master hearing aid. J Rehabil Res Dev 1986;23(1):79-87.
25. Neuman AC, Levitt H, Mills R, Schwander T. An evaluation of three adaptive hearing aid selection strategies. J Acoust Soc Am 1987;82:1967-76.
26. Evans EF. How to provide speech through an implant device. Dimensions of the problem: an overview. In: Schindler RA, Merzenich MM, editors. Cochlear implants. New York: Raven Press, 1985:167-83.
27. Clark GM, Tong YC, Patrick JF. Cochlear prostheses. New York: Churchill Livingstone, 1990:1-264.
28. Wilson BS, Lawson DT, Finley CC, Wolford RD. Coding strategies for multichannel cochlear prostheses. Am J Otol 1991;12(Suppl 1):55-60.
29. Skinner MW, Holden LK, Holden TA, Dowell RC, et al. Performance of postlingually deaf adults with the wearable speech processor (WSP III) and mini speech processor (MSP) of the Nucleus Multi-Electrode Cochlear Implant. Ear Hear 1991;12(1):3-22.
30. Dillier N, Senn C, Schlatter T, Stockli M. Wearable digital speech processor for cochlear implants using a TMS320C25. Acta Otolaryngol (Stockh) 1990;Suppl 469:120-7.
31. Bogli H, Dillier N. Digital speech processor for the Nucleus 22-channel cochlear implant. Proc IEEE EMBS 1991;13(4):1901-2.