[IEEE TENCON 2008 - 2008 IEEE Region 10 Conference (TENCON) - Hyderabad, India (2008.11.19-2008.11.21)]



Reliability of Facial Muscle Activity to Identify Vowel Utterance

Ganesh R Naik 1, Dinesh K Kumar 1, Sridhar P Arjunan 1

1 School of Electrical and Computer Engineering, RMIT University

GPO BOX 2476V, Victoria-3001, Australia

[email protected]

Abstract—This paper evaluates the reliability of using muscle activation during unuttered (silent) vowel articulation by an individual, and reports on experiments repeated over several days. Surface electromyogram has been used as an indicator of muscle activity, and independent component analysis (ICA) has been used to separate the electrical activity of the different muscles. The results demonstrate that there is a 'reasonable' relationship between the activities of the corresponding muscles when the experiments are repeated. The results also indicate that there is variation when the same sound is spoken at different speeds of utterance. This can be attributed to the lack of audio feedback when the same sound is uttered. This analysis will be useful for research employing facial surface electromyography (sEMG) for speech analysis, which is being considered as a potential human computer interface (HCI) for people suffering from motor disabilities.

I. INTRODUCTION

When we speak in noisy environments, or with people with hearing loss, lip and facial movements often compensate for the lack of audio quality. Identification of speech from lip movement can be achieved using visual sensing, by sensing the movement and shape using mechanical sensors [1], or by relating the movement and shape to muscle activity [2],[3]. Each of these techniques has strengths and limitations. The video based technique is often computationally expensive, requires a camera monitoring the lips that is fixed to the user's head, and is sensitive to lighting conditions. The sensor based technique has the obvious disadvantage that it requires the user to have sensors fixed to the face, making the system not user friendly. The muscle monitoring systems have the limitation of low reliability. A further difficulty is that each of these systems is user dependent and not suitable for use by multiple users.

Surface electromyography (sEMG) is a surface recording of muscle activity. It is a result of the spatial and temporal integration of the motor unit action potentials (MUAPs) originating from different motor units. Being non-invasive and an important indicator of muscle activity, sEMG is useful, but the presence of multiple muscle activities and the random nature of the transmission path make the signal difficult to use reliably when muscle activity is small and actions are complex. It is difficult to separate muscle activity originating from different muscles due to the similarity of the signals. To overcome the reliability issues identified in earlier work by the authors, cross talk has been considered and eliminated using independent component analysis (ICA).

ICA is a technique suitable for blind source separation: separating the signals of different sources from their mixture. ICA algorithms can be considered information theory based unsupervised learning rules. Given a set of multidimensional observations, which are assumed to be linear mixtures of unknown independent sources through an unknown mixing matrix, an ICA algorithm searches for the un-mixing matrix by which the observations can be linearly transformed to form independent output components. ICA can be employed in unsupervised situations, which makes it very attractive for a number of applications, especially audio and biosignal applications. Muscle activity originating from different muscles can be considered to be independent, and this supports the use of ICA for separating the muscle activity originating from the different muscles. Earlier work by the authors tested ICA for separating EMG signals for the purpose of identifying hand gestures and actions [4]. ICA has been proposed for unsupervised cross talk removal from sEMG recordings of the muscles of the hand [5]. Research that isolates MUAPs originating from different muscles and motor units was reported in 2004 [6]. A denoising method using ICA and high-pass filter banks has been used to suppress the interference of the electrocardiogram (ECG) in EMG recorded from trunk muscles [7].

This paper reports the results of experiments conducted to study the reliability of using facial muscle activity for identifying unuttered speech. The aim of this work is to test the hypothesis that when people speak, they repeat the same muscle activation, and hence the muscle activity is a good indicator of what they are speaking. ICA has been used to separate the muscle activity of the different facial muscles, and features of these components have been classified to ensure that the cross-talk issue due to multiple active muscles is reduced. There are numerous applications of this research, including determining whether an individual is speaking normally, and developing a new human computer interface (HCI) tool for people with special requirements.

II. THEORY

The first step in the classification of facial sEMG recordings is to determine the role of the facial muscles in the production of speech. There are various speech production models that describe the mechanisms of speech production. These models


Fig. 1. ICA block diagram. s(t) are the sources, x(t) the recordings, and ŝ(t) the estimated sources; A is the mixing matrix and W is the un-mixing matrix.

commonly describe the mouth as an audio filter, where the shape of the mouth cavity and the lips modulate the air, which is a mixture of the fundamental frequencies and a flat spectrum, to generate the sounds. For the purpose of identifying the shape of the mouth and the muscle activity associated with speech, it is important to identify the anatomical details of speech production. Articulatory phonetics considers the anatomical detail of the production of speech sounds. For this purpose, it is convenient to divide the speech sounds into vowels and consonants. The consonants are relatively easy to define in terms of the shape and position of the vocal organs, but the vowels are less well defined; this may be explained by the fact that the tongue typically never touches another organ when making a vowel [8]. When considering speech articulation, the shape of the mouth remains constant while speaking vowels, whereas during consonants the shape of the mouth changes.

A. Facial muscles for speech

Speech production is the result of a complex combination of multiple facial and other muscles, and depends on the control of these muscles, the precise anatomy of the speaker, and an active adaptive control. When using facial sEMG to determine the shape of the lips and the mouth, there is the issue of the choice of muscles and the corresponding location of the electrodes. The structure of the face is more complex than that of the limbs, with a large number of overlapping muscles. It is thus difficult to identify the specific muscles that are responsible for specific facial actions and shapes. There is also the difficulty of cross talk due to the overlap between the different muscles. This is made more complex by the temporal variation in the activation and deactivation of the different muscles. The use of the integral of the root mean square (RMS) of sEMG is useful in overcoming the issues of cross talk and the temporal difference between the activation of the different muscles that may be close to one set of electrodes. Due to the complex relationship of the various muscles that are activated to produce a sound, statistical distance based cluster analysis and a back-propagation neural network have been used for classifying the integral of the RMS of the sEMG recordings.

It is impractical to consider all the facial muscles and record their electrical activity. The major muscles that are responsible for the production of speech have thus been considered. In this study, only four facial muscles have been selected. The Zygomaticus Major arises from the front surface of the zygomatic bone and merges with the muscles at the corner of the mouth. The Depressor anguli oris originates from the mandible, inserts into the skin at the angle of the mouth, and pulls the corner of the mouth downward. The Masseter originates from the maxilla and zygomatic arch and inserts into the ramus of the mandible to elevate and protrude the mandible, and assists in its side-to-side movements. The Mentalis originates from the mandible and inserts into the skin of the chin to elevate and protrude the lower lip and pull the skin of the chin into a pout [9].

III. ICA FOR FACIAL SEMG

ICA is an iterative technique that estimates statistically independent source signals from a given set of their linear combinations. The process involves determining the mixing matrix. The independent sources could be audio signals such as speech, voice, or music, or signals such as bioelectric signals.

Fig. 2. General methodology block diagram

ICA is a technique for extracting statistically independent variables from a mixture of them. ICA searches for a linear transformation to express a set of random variables as linear combinations of statistically independent source variables [10]. The criterion involves the minimization of the mutual information expressed as a function of high order cumulants. ICA separates signals from different sources into distinct components. The technique is based on unsupervised learning rules in which reduction of mutual information and increase in non-Gaussianity are the cost functions. Given a set of multidimensional observations, which are the result of linear mixing of unknown independent sources through an unknown mixing matrix, ICA can be employed to separate the signals of the different sources. The independent sources may be audio signals such as speech, voice, or music, or signals such as bioelectric signals. If the mixing process is assumed to be linear, it can be expressed as

x = As (1)

where x = (x1, x2, ..., xn) are the recordings, s = (s1, s2, ..., sn) the original signals, and A is the n × n mixing matrix of real numbers. This mixing matrix and the original signals are unknown. To separate the recordings into the original signals, an ICA algorithm performs a search for the un-mixing matrix W by which the observations can be linearly transformed to form independent output components, so that

s = Wx (2)

For this purpose, ICA relies strongly on the statistical independence of the sources s. The technique iteratively estimates the un-mixing matrix using the maximization of the independence of the sources as the cost function [10]. The ICA source separation process is illustrated in Fig. 1.
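The linear model of equations (1) and (2) can be sketched numerically. The sketch below is a minimal illustration with two hypothetical toy sources and a made-up mixing matrix; in practice A is unknown and W must be estimated by an ICA algorithm rather than computed by direct inversion.

```python
import numpy as np

# Two hypothetical toy sources standing in for muscle activity signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s = np.vstack([np.sin(2 * np.pi * 7 * t),            # source 1
               np.sign(np.sin(2 * np.pi * 3 * t))])  # source 2

# In practice A is unknown; it is fixed here only for illustration.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
x = A @ s                    # recordings: x = As, eq. (1)

# With a perfect estimate, W is the inverse of A and recovers s, eq. (2).
W = np.linalg.inv(A)
s_hat = W @ x
```

Running this, `s_hat` matches `s` exactly, which is the idealised case an ICA algorithm approximates (up to scaling and permutation of the rows).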

A number of researchers have reported the use of ICA for separating the desired sEMG from artefacts and from the activity of other muscles [4][6]. While details differ, the basic technique is that the different channels of sEMG recordings are the inputs of the ICA algorithm. The fundamental principle of ICA is to determine the un-mixing matrix and use it to separate the mixture into the independent components. The independent components are computed from linear combinations of the recorded data. The success of ICA in separating the independent components from the mixture depends on the properties of the recordings. This concept of ICA is used to separate the independent components from the facial sEMG signal, which arises from a complex muscle structure. This paper reports the research conducted to evaluate the reliability of facial sEMG during vowel utterance using ICA.

IV. METHODOLOGY

Experiments were performed to determine the inter-experiment variation of the vowel based muscle activity analysis. To this end, the performance of facial sEMG based vowel speech recognition was evaluated over different experiments. The general block diagram of the overall process is shown in Fig. 2.

A. EMG Recording and Processing of Data

The experiments were approved by the Human Experiments Ethics Committee of the University. Experiments were conducted in which the sEMG activity of suitable facial muscles was acquired from subjects uttering 5 vowels (a/e/i/o/u). As the muscle contraction is stationary during the utterance, the RMS value of each of the signals over the duration of the utterance was computed and used for further analysis. A four channel, portable, continuous recording MEGAWIN system (from MEGA Electronics, Finland) was used for this purpose.

Fig. 3. Points of placement of electrodes for the different facial muscles [9].

Fig. 4. Placement of the electrodes over the facial muscles during the vowel utterance

The raw signal, sampled at 2000 samples/second, was recorded. The target sites were cleaned with alcohol wet swabs. Ag/AgCl electrodes (AMBU Blue sensors from MEDICOTEST, Denmark) were mounted at appropriate locations close to the selected facial muscles (refer Fig. 3): the right side Zygomaticus Major, Masseter and Mentalis, and the left side Depressor anguli oris. The inter electrode distance was kept constant at 1 cm for all channels and all experiments. Controlled experiments were conducted in which the subject was asked to speak the 5 English vowels (a/e/i/o/u). Each vowel was spoken separately such that there was a clear start and end to the utterance. During each utterance, facial sEMG from the muscles was recorded (Fig. 4). sEMG from four channels was recorded simultaneously.

B. Data Analysis

Preliminary analysis was performed using the RMS, which is a common feature for measuring muscle activation. Four RMS values were generated for the utterance of each vowel, representing the total muscle activity of the four muscles. Normalisation of the RMS values with respect to channel 1 was performed for the five vowels and ten experiments. The normalisation was performed to obtain the relative muscle activity of each utterance. A scatter plot of the normalised RMS values was produced to visualise the formation of the clusters.
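The normalisation step above can be sketched as follows. The `emg` array and function names are hypothetical stand-ins for one utterance of 4-channel data; the paper does not specify the implementation.

```python
import numpy as np

def channel_rms(emg):
    """RMS of each channel over the utterance (emg: channels x samples)."""
    return np.sqrt(np.mean(emg ** 2, axis=1))

def normalise_to_channel1(emg):
    """Relative muscle activity: each channel's RMS divided by channel 1's."""
    r = channel_rms(emg)
    return r / r[0]

# Hypothetical 4-channel utterance (4 x 2000 samples of surrogate data).
emg = np.random.default_rng(1).normal(size=(4, 2000))
rel = normalise_to_channel1(emg)   # rel[0] is 1.0 by construction
```

Dividing by channel 1 removes the overall amplitude of a session, so only the relative distribution of activity across muscles is compared between experiments.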

The next step was to test the reliability of facial sEMG for classifying different vowel utterances using ICA. For this analysis, 10 sets of data from the experimental recordings were considered. The four channel recordings and four active muscles form a 4×4 mixing matrix. Each set of facial sEMG data was analysed using the fastICA MATLAB package [11]. The mixing matrix A was computed for the first set of data only and was kept constant throughout the experiment. The independent sources of the MUAPs that mix to make the sEMG recordings were computed using the following:

s = Wx (3)

where W is the inverse of the mixing matrix A. This process was repeated for each of the five vowel utterance experiments. Four sources were estimated for each experiment. An example of ICA source separation for four channel data is shown in Fig. 5. After separating the four sources s1, s2, s3 and s4, the RMS was computed for each of the separated sources using the following relation:
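The protocol of estimating the un-mixing matrix once and reusing it across sets can be sketched with scikit-learn's FastICA, as a stand-in for the MATLAB package the paper used. The data arrays here are hypothetical surrogates (cubed Gaussians, to give FastICA the non-Gaussian signals it needs), not real sEMG.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical stand-ins for two 4-channel recordings (samples x channels).
rng = np.random.default_rng(2)
first_set = rng.normal(size=(5000, 4)) ** 3   # non-Gaussian surrogate signals
later_set = rng.normal(size=(5000, 4)) ** 3

ica = FastICA(n_components=4, random_state=0)
ica.fit(first_set)                   # un-mixing matrix estimated here only
s_first = ica.transform(first_set)   # sources for the first set
s_later = ica.transform(later_set)   # same un-mixing applied to a later set
```

Keeping the fitted `ica` object fixed mirrors the paper's choice of computing A from the first set only, so that differences between sessions reflect the muscle activity rather than a re-estimated separation.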

s_RMS = √( (1/N) Σ_{i=1}^{N} s_i² ) (4)

where s is the source and N is the number of samples. This results in one number representing the muscle activity for each channel for each vowel utterance. The RMS value of each source represents the activity of that muscle and is indicative of the strength of its contraction.
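Equation (4) reduces each separated source to a single activity number. A minimal sketch, on hypothetical sources:

```python
import numpy as np

def source_rms(s):
    """Eq. (4): RMS per separated source (s: n_sources x N samples)."""
    return np.sqrt(np.mean(s ** 2, axis=1))

# Sanity check with made-up sources: a constant signal of amplitude 2 has RMS 2.
s = np.vstack([np.full(5000, 2.0),
               np.random.default_rng(3).normal(size=5000)])
rms = source_rms(s)
```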

The above process was repeated for the five vowels (a/e/i/o/u). The results of this process were used to train a back propagation neural network with 3 inputs and 3 outputs, taking different combinations of vowels such as (a/i/u), (i/o/u) and (a/o/u). After training, the same ANN architecture was used to test data that had not been used for the training. The ability of the network to correctly classify the inputs against known facial muscle activity was used to determine the efficacy of the technique in identifying the utterance.
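The classification stage can be sketched with scikit-learn's MLPClassifier, a back-propagation network, standing in for the ANN the paper used. The features and labels below are hypothetical (well-separated clusters, not real RMS data), and the hidden-layer size is an assumption; the paper only fixes 3 inputs and 3 outputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: 3 RMS features per utterance, 20 utterances
# per vowel, with the class mean shifted along a different axis per vowel.
rng = np.random.default_rng(4)
X_train = rng.normal(size=(60, 3)) + np.repeat(np.eye(3) * 3, 20, axis=0)
y_train = np.repeat(["a", "i", "u"], 20)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_train, y_train)   # training accuracy, as a smoke test
```

In the paper's protocol the held-out test data, not the training data, would be scored; the training score here only confirms the network converges on separable features.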

The quality of separation of the facial muscle activities was validated using ICA global matrix analysis. The sEMG signals (wide-band source signals) are a linear decomposition of several narrow-band sub components: s(t) = [s1(t) + s2(t) + s3(t), . . . , sn(t)]^T, where s1(t), s2(t), . . . , sn(t) are each approximately 5000 samples in length and are obtained from the recorded signals x1(t), x2(t), . . . , xn(t) using ICA. Such a decomposition can be modelled in the time, frequency or time-frequency domain using any suitable linear transform. A set of un-mixing or separating matrices is obtained: W1, W2, W3, . . . , Wn, where W1 is the un-mixing matrix for sensor data x1(t) and Wn is the un-mixing matrix for sensor data xn(t). If the specific sub-components of interest are mutually independent for at least two sub-bands, or more generally two subsets of multi-band, say for sub band "p" and sub band "q", then the global matrix

Gpq = Wp × Wq⁻¹ (5)

will be a sparse generalized permutation matrix P with a special structure: only one non-zero (or strongly dominating) element in each row and each column [12]. This follows from the simple mathematical observation that in such a case both matrices Wp and Wq represent pseudo-inverses (or true inverses in the case of square matrices) of the same true mixing matrix A (ignoring the non-essential and unavoidable arbitrary scaling and permutation of the columns), under the assumption that the sources for the two multi-frequency sub-bands are independent [12].
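The global-matrix test in eq. (5) can be sketched numerically. The matrices below are hypothetical: both un-mixing matrices derive from the same made-up mixing matrix A, differing only by a scaling, so G should come out as a scaled permutation (here diagonal) matrix with one dominant element per row.

```python
import numpy as np

A = np.array([[1.0, 0.3],
              [0.2, 1.0]])                      # hypothetical mixing matrix
W_p = np.linalg.inv(A)                          # ideal estimate from band p
W_q = np.diag([2.0, -1.5]) @ np.linalg.inv(A)   # band q: same A, other scaling

G = W_p @ np.linalg.inv(W_q)                    # eq. (5)

# Count strongly dominating elements per row: exactly one per row indicates
# independent sources; several, with det(G) near zero, indicates dependence.
dominant_per_row = (np.abs(G) > 0.5 * np.abs(G).max(axis=1, keepdims=True)).sum(axis=1)
```

Here G reduces to diag(0.5, -2/3), the inverse of the scaling between bands, so the permutation structure survives exactly; with estimated matrices the off-diagonal entries would merely be small rather than zero.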


Fig. 5. Estimated four channel source signals ŝ(t) from a four channel recording x(t) for one of the facial muscle activities, using the fastICA algorithm.

Fig. 6. 3D plot showing the normalized values for all English vowels (/a/, /e/, /i/, /o/, /u/) from participant 1, recorded during different sessions. Axes: Masseter muscle, DAO muscle and Mentalis muscle.

V. RESULTS AND OBSERVATIONS

A. Scatter plot of Normalized RMS values

Data points from each of the vowels were given a distinct symbol and colour for ease of visual observation (Fig. 6). From the scatter plot, it is observed that identification of the clusters is difficult and the data are not linearly separable when the normalised RMS features from different sessions are used. This shows that the classification of vowel utterances needs better feature extraction and classification techniques. In order to determine the reliability of separation of these muscle activities, ICA based analysis was used as the next step. To validate the reliability of facial sEMG, different combinations of 3 vowels were used as input variables.

B. ICA based analysis

The ICA based results demonstrate the performance of the separation of facial muscle activities during vowel utterance.

TABLE I
AVERAGE CLASSIFICATION RESULTS FOR /a/, /o/ AND /u/ VOWELS USING ICA

Vowel   Correctly Classified Vowels
        Day 1   Day 2
/a/     60%     65%
/o/     55%     60%
/u/     60%     60%

TABLE II
AVERAGE CLASSIFICATION RESULTS FOR /e/, /i/ AND /u/ VOWELS USING ICA

Vowel   Correctly Classified Vowels
        Day 1   Day 2
/e/     60%     55%
/i/     65%     60%
/u/     60%     60%

The use of these normalized values to train the ANN on data from individual subjects showed easy convergence. The results of testing the ANN's ability to correctly classify the test data, based on the weight matrix generated from the training data, are tabulated in Tables I, II and III for the three different sets of vowels. The accuracy was computed as the percentage of correctly classified data points out of the total number of data points. The results indicate an overall average accuracy of about 60%.
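The accuracy figure is simply the fraction of correct classifications, as a quick sketch on hypothetical labels shows:

```python
# Hypothetical predicted and true vowel labels for ten test points.
predicted = ["a", "o", "u", "a", "o", "u", "a", "o", "u", "a"]
actual    = ["a", "o", "u", "o", "a", "u", "a", "u", "u", "a"]

correct = sum(p == t for p, t in zip(predicted, actual))
accuracy = 100.0 * correct / len(actual)   # percentage of correct points
```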


TABLE III
AVERAGE CLASSIFICATION RESULTS FOR /a/, /i/ AND /u/ VOWELS USING ICA

Vowel   Correctly Classified Vowels
        Day 1   Day 2
/a/     60%     55%
/i/     65%     60%
/u/     60%     60%

C. Validation of the results using ICA global matrix analysis

The ICA results (60%) were validated using global matrix analysis. The following shows one example of the global matrix (G) for the facial sEMG signals.

G =
⎛  0.0485  -1.1738   0.0891  -1.1105 ⎞
⎜ -0.8019   1.0171   0.7873   0.1669 ⎟
⎜ -0.8377   0.0142   1.1837  -1.0169 ⎟
⎝ -1.4905   0.0192  -1.3557   0.4750 ⎠

Determinant of G = 0.0013

By inspecting the above matrix it is evident that the sources are dependent, because each row contains more than one dominant value. To confirm this, the determinant of the global matrix G was computed; the result is very close to zero, which from matrix theory indicates that the sources are dependent [13]. The above analysis accounts for the poor classification (60%) of the facial muscle activities.

VI. DISCUSSION

The results demonstrate that the proposed method provides interesting results on the inter-experimental variations in facial muscle activity during different vowel utterances. The accuracy of recognition is poor when the system is used to test the trained network across all subjects. This has been verified using the scatter plot and validated using ICA based global matrix analysis, and shows the large variation between subjects (inter-subject variation) arising from different styles and speeds of speaking. The method has only been tested for a limited set of vowels, because the muscle contraction during the utterance of vowels is relatively stationary while during consonants there are greater temporal variations.

The results demonstrate that for such a system to succeed, it needs to be improved. Possible improvements suggested by the authors include improved electrodes, site preparation, electrode location, and signal segmentation. The method also has to be evaluated on larger data sets with many subjects in future.

VII. CONCLUSIONS

This paper reports on the reliability of facial muscle activity during different vowel utterances. Surface EMG was used to determine the muscle activity. Independent component analysis was used to separate the muscle activity of the different muscles. The RMS of the muscle activity was the parameter used for comparing the similarity of the muscle activity. The results indicate that while there is a similarity between the muscle activities, there are inter-experimental variations. Normalisation of the data reduced the variation in magnitude of the facial sEMG between different experiments. The work indicates that people use the same set of muscles for the same utterances, but that there is variation in the muscle activities. This can serve as a preliminary analysis for facial sEMG based speech recognition in HCI applications.

REFERENCES

[1] H. Manabe, A. Hiraiwa, and T. Sugimura, "Unvoiced speech recognition using sEMG - Mime Speech Recognition", CHI, 2003.

[2] D. C. Chan, K. Englehart, B. Hudgins, and D. F. Lovely, "A multi-expert speech recognition system using acoustic and myoelectric signals", in Proc. 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society, 2002.

[3] S. Kumar, D. K. Kumar, M. Alemu, and M. Burry, "EMG based voice recognition", in Proc. Intelligent Sensors, Sensor Networks and Information Processing Conference, 2004.

[4] G. R. Naik, D. K. Kumar, V. P. Singh, and M. Palaniswami, "sEMG for identifying hand gestures using ICA", in Proc. ICINCO workshop, 2006.

[5] A. Greco, D. Costantino, F. C. Morabito, and M. A. Versaci, "A Morlet wavelet classification technique for ICA filtered sEMG experimental data", in Proc. International Joint Conference on Neural Networks, 20-24 July 2003, pp. 166-171.

[6] H. Nakamura, M. Yoshida, M. Kotani, K. Akazawa and T. Moritani, "The application of independent component analysis to the multi-channel surface electromyographic signals for separation of motor unit action potential trains", Journal of Electromyography and Kinesiology, vol. 14, no. 4, pp. 423-432, August 2004.

[7] H. Yong, X. Li, Y. Cao, and K. D. K. Luk, "Applying Independent Component Analysis on ECG Cancellation Technique for the Surface Recording of Trunk Electromyography", in Proc. 27th Annual Conference of the IEEE Engineering in Medicine and Biology Society, Shanghai, China.

[8] T. W. Parsons, Voice and Speech Processing, 1986.

[9] A. J. Fridlund and J. T. Cacioppo, "Guidelines for human electromyographic research", Journal of Psychophysiology, vol. 23, 1996.

[10] A. Hyvarinen, J. Karhunen and E. Oja, Independent Component Analysis, New York: John Wiley, 2001.

[11] A. Hyvarinen, "A fast and robust fixed-point algorithm for independent component analysis", IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626-634, 1999.

[12] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing, New York: John Wiley, 2002.

[13] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, UK: Cambridge, 2000.