
Research Article

Recognition of Emotions Using Multichannel EEG Data and DBN-GC-Based Ensemble Deep Learning Framework

Hao Chao, Huilai Zhi, Liang Dong, and Yongli Liu

School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, China

Correspondence should be addressed to Huilai Zhi; zhihuilai@126.com

Received 14 September 2018; Revised 14 November 2018; Accepted 25 November 2018; Published 13 December 2018

Academic Editor: Justin Dauwels

Copyright © 2018 Hao Chao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fusing multichannel neurophysiological signals to recognize human emotion states is becoming increasingly attractive. Conventional methods ignore the complementarity between the time domain, frequency domain, and time-frequency characteristics of electroencephalogram (EEG) signals and cannot fully capture the correlation information between different channels. In this paper, an integrated deep learning framework based on improved deep belief networks with glia chains (DBN-GCs) is proposed. In the framework, member DBN-GCs are employed to extract intermediate representations of raw EEG features from multiple domains separately, as well as to mine interchannel correlation information through the glia chains. Then, the higher-level features describing the time domain, frequency domain, and time-frequency characteristics are fused by a discriminative restricted Boltzmann machine (RBM) to implement the emotion recognition task. Experiments conducted on the DEAP benchmark dataset achieve average accuracies of 75.92% and 76.83% for arousal and valence classification, respectively. The results show that the proposed framework outperforms most of the compared deep classifiers. Thus, the potential of the proposed framework is demonstrated.

1. Introduction

Emotion plays an important role in the daily life of human beings. In particular, people communicate more easily through emotional expression, and different emotional states can affect people's learning, memory, and decision-making. Therefore, recognition of different emotional states has wide application prospects in the fields of distance education, medicine, intelligent systems, and human-computer interaction. Emotion recognition has recently been highly valued by researchers and has become one of the most important research issues [1].

Emotion recognition can be performed using external features such as facial expressions and voice intonation [2–6]. It can also be performed according to changes in physiological signals such as the electroencephalogram. Compared with physiological signals, facial/vocal expressions are easily affected by the external environment, and their parameters vary easily across situations. In contrast, emotion recognition results from EEG signals are relatively objective, since physiological signals are hard to camouflage. Therefore, studies of the associations between EEG activity and emotions have received much attention [7–9].

Emotion recognition is essentially a pattern recognition task, and one of the key steps is extracting emotion-related features from the multichannel EEG signals. Various EEG features in the time domain, frequency domain, and time-frequency domain have been proposed in the past. Time-domain features can identify characteristics of the EEG time series that vary between different emotional states. Statistical parameters of the EEG series, such as the mean, standard deviation, and power, have usually been employed [10–12]. Frantzidis et al. used the amplitude and latency of event-related potentials (ERPs) as features for emotion recognition [13]. In addition, Hjorth features [14, 15], the nonstationary index [16], and higher order crossing features [17, 18] have also been utilized. Power features from different frequency bands of the EEG signal are the most popular frequency domain technique.

Hindawi, Computational Intelligence and Neuroscience, Volume 2018, Article ID 9750904, 11 pages. https://doi.org/10.1155/2018/9750904

The EEG power spectral density (PSD) in the alpha (8–13 Hz) band was reported to be significantly correlated with the level of valence [19]. The EEG PSDs in the delta (1–4 Hz) and theta (4–7 Hz) bands extracted from three central channels also contain salient information related to both arousal and valence levels [13]. Based on time-frequency analysis, the Hilbert–Huang spectrum (HHS) [20] and the discrete wavelet transform method [21, 22] have been proposed for emotion classification tasks. The above research shows that the time domain, frequency domain, and time-frequency characteristics of EEG signals can each provide salient information related to emotional states.

Usually, machine learning methods are used to establish emotion recognition models. Samara et al. fused statistical measurements, band power from the β, δ, and θ waves, and higher order crossings of the EEG signal, employing a support vector machine (SVM) as the classifier [23]. Jadhav et al. proposed a novel technique for EEG-based emotion recognition using gray-level co-occurrence matrix- (GLCM-) based features and a k-nearest neighbors (KNN) classifier [24]. Thammasan et al. applied three commonly used algorithms to classify emotional classes: a support vector machine based on the Pearson VII universal kernel (PUK), a multilayer perceptron (MLP) with one hidden layer, and C4.5 [25]. Recently, various deep learning (DL) approaches have been investigated for EEG-based emotion classification. Standard deep belief networks (DBNs) were employed by Wang and Shang to extract features from raw physiological data to recognize the levels of arousal, valence, and liking [26]. In reference [27], two types of deep learning approaches, stacked denoising autoencoders and deep belief networks, were applied as feature extractors for the affective state classification problem using EEG signals. Li et al. designed a hybrid deep learning model that combines a convolutional neural network (CNN) and a recurrent neural network (RNN) to extract EEG features [28].

Compared with traditional machine learning methods, DL has achieved promising results. However, two challenges remain in multichannel EEG-based emotion recognition. Firstly, since the time domain, frequency domain, and time-frequency characteristics of EEG signals all contain salient information related to emotional states, the complementarity between the different types of features derived from these characteristics should naturally be considered. Thus, feature extraction and feature fusion of multichannel EEG signals in the time domain, frequency domain, and time-frequency domain need to be investigated to achieve better performance. Generally, a simple deep model such as a DBN or CNN can abstract intermediate representations of multichannel EEG features and achieve fusion at the feature level [29]. Nevertheless, given the high dimensionality and limited training samples of physiological data, too many nodes in each layer of the deep network will lead to model overfitting. Secondly, capturing the correlation information between different channels of the EEG signal and extracting deep correlation features, which have been ignored by researchers, need to be taken into consideration when performing feature fusion with a deep model.

To address the two issues mentioned above, an integrated deep learning framework composed of DBN-GCs is proposed in this paper. As a special nerve cell in the human brain, the glia cell can transmit signals to neurons and to other glia cells. Therefore, researchers have paid attention to the characteristics of glia cells and applied them to artificial neural networks [30, 31]. In the framework, raw multidomain features are obtained from multichannel EEG signals. Then, intermediate representations of the raw multidomain features are separately extracted by member DBN-GCs, in which glia chains mine interchannel correlation and help to optimize the learning process. Finally, a discriminative RBM is used to obtain the emotion predictions. In the experiments, the effectiveness of our method is validated on the multichannel EEG data of the DEAP dataset, which is widely used for emotion recognition.

The rest of this paper is organized as follows. A detailed description of the proposed deep learning framework based on DBN-GC is presented in Section 2. The experimental results and discussion are reported in Section 3. Section 4 briefly concludes the work.

2. Methods

2.1. Database. In this research, the DEAP dataset is used for emotion analysis. DEAP is an open source dataset developed by the research team at Queen Mary University of London [32]. It recorded the multimodal physiological signals produced by 32 volunteers under the stimulus of selected videos. The multimodal physiological signals include EEG and peripheral physiological signals. Each volunteer watched 40 one-minute-long videos. While each video was presented, the EEG and peripheral physiological signals of the volunteer were recorded synchronously. The EEG was recorded from 32 sites (Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, and O2). Finally, subjective ratings of arousal, valence, liking, and dominance on a scale of 1–9 were provided. In this study, we focus only on the arousal and valence scales. Thus, a two-dimensional emotional model (illustrated in Figure 1) can be built, where the two dimensions are arousal and valence, respectively. We divide and label the trials into two classes for valence and arousal, respectively (pleasant: >5, unpleasant: ≤5; aroused: >5, relaxed: ≤5).
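The thresholding above can be sketched as follows (the rating values are hypothetical examples, not DEAP data):

```python
import numpy as np

def binarize_ratings(ratings, threshold=5.0):
    """Map continuous 1-9 self-assessment ratings to binary labels: >5 -> 1, <=5 -> 0."""
    return (np.asarray(ratings) > threshold).astype(int)

# Hypothetical valence ratings for four trials
valence = [7.1, 4.9, 5.0, 8.3]
print(binarize_ratings(valence))  # [1 0 0 1]
```

Note that a rating of exactly 5 falls into the "low" class, matching the ≤5 rule above.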

2.2. Data Preprocessing and Feature Extraction. In this study, only EEG signals are employed for emotion recognition. The EEG signals, recorded at a 512 Hz sampling frequency, are downsampled to 128 Hz. Then, filtering is implemented by a band-pass filter with cutoff frequencies of 4.0 and 45.0 Hz.
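A sketch of this preprocessing chain for a single channel using SciPy; the filter order and the choice of `decimate`/`filtfilt` are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt

def preprocess_channel(raw, fs_in=512, fs_out=128, band=(4.0, 45.0)):
    """Downsample one EEG channel to 128 Hz, then band-pass filter 4.0-45.0 Hz."""
    # decimate applies an anti-aliasing filter before keeping every 4th sample
    x = decimate(raw, q=fs_in // fs_out, zero_phase=True)
    nyq = fs_out / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    return filtfilt(b, a, x)           # zero-phase band-pass

sig = np.random.randn(512 * 60)        # one minute of synthetic 512 Hz data
out = preprocess_channel(sig)
print(out.shape)                       # (7680,), i.e. 60 s at 128 Hz
```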

In order to make full use of the salient information regarding emotional states in EEG signals, four types of raw features, which characterize the information in the time domain, frequency domain, and time-frequency domain, respectively, are extracted from the EEG signals [32–36]; a detailed description is given in Table 1. The 14 EEG channel pairs used for computing power differences are Fp2-Fp1, AF4-AF3, F4-F3, F8-F7, FC6-FC5, FC2-FC1, C4-C3, T8-T7, CP6-CP5, CP2-CP1, P4-P3, P8-P7, PO4-PO3, and O2-O1. Thus, the dimension of the feature vector for one instance is 664, and the label for each instance is 2-dimensional.

For one volunteer, the size of the corresponding data matrix is 40 × 664 (videos/instances × features). For all 32 volunteers, 40 × 32 = 1280 instances are available. The data matrix of each volunteer was standardized to remove differences in feature scales.
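The per-volunteer standardization can be sketched as column-wise z-scoring of each volunteer's 40 × 664 matrix; the synthetic data and function name below are illustrative:

```python
import numpy as np

def standardize_per_volunteer(X):
    """Z-score each feature column within one volunteer's 40 x 664 matrix."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0            # guard against constant features
    return (X - mu) / sigma

rng = np.random.default_rng(0)
data = {v: rng.normal(size=(40, 664)) for v in range(32)}          # 32 volunteers
data = {v: standardize_per_volunteer(X) for v, X in data.items()}  # per-volunteer scaling
total = sum(X.shape[0] for X in data.values())
print(total)                           # 1280 instances in all
```

Standardizing within each volunteer (rather than across the pooled 1280 × 664 matrix) removes per-subject offset and scale differences before the matrices are combined.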

2.3. Ensemble Deep Learning Framework

2.3.1. Improved DBN with Glia Chains. For emotion recognition tasks, deep learning methods hypothesize that a hierarchy of intermediate representations of the raw EEG features is necessary to characterize the underlying salient information related to different emotional states. The deep belief network, which is composed of many restricted Boltzmann machines stacked together, has a strong ability to learn high-level representations, benefiting from a deep structure-based learning mechanism with multiple hidden layers.

As shown in Figure 2, the output of the first RBM is used as the input of the second RBM. Similarly, the third RBM is trained on the output of the second RBM. In this way, a deep hierarchical model can be constructed that learns from low-level features to obtain a high-level representation.

Since there are no interconnections among the neural units of a DBN within the same layer, it is hard to exploit the mutual information of different neural units in the same layer. This means that a DBN struggles to mine interchannel and interfrequency correlation information from multichannel EEG signals in emotion recognition tasks. Considering this, an improved DBN with glia chains is introduced in this paper.

The structure of the DBN-GC is shown in Figure 3. In addition to the two layers of units of each RBM, there is a group of glia cells, represented by stars and linked into a chain structure. Each glia cell is also connected to a unit in the hidden layer of the RBM, as shown in Figure 4. There is no weight between the glia cells and the corresponding hidden units. The effect of all glia cells in the training process is applied directly to the hidden units, and the outputs of the hidden layer nodes are adjusted accordingly. Through the connections between glia cells, each glia cell can also transmit an activation signal to other glia cells and adjust their glia effect.

Table 1: Detailed description of raw EEG features.

No. 1–128 | Statistical measures | Time domain | Mean, variance, zero-crossing rate, and approximate entropy of 32 EEG channels (4 features × 32 channels)

No. 129–288 | Power features | Frequency domain | Average PSD in the theta (4–8 Hz), slow-alpha (8–10 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–45 Hz) bands for all EEG channels (5 powers × 32 channels)

No. 289–344 | Power differences | Frequency domain | Difference of average PSD in the theta, alpha, beta, and gamma bands for 14 EEG channel pairs between the right and left scalp (4 power differences × 14 channel pairs)

No. 345–664 | HHS features | Time-frequency domain | Average values of the squared amplitude and instantaneous frequency of the HHS-based time-frequency representation in the delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–45 Hz) bands for all EEG channels (2 features × 5 bands × 32 channels)

Figure 1: Arousal-valence plane (valence from negative to positive, arousal from low to high; example states include peaceful, happiness, sadness, angry, and neutral).

Figure 2: Structure of deep belief network (a stack of four RBMs, each hidden layer serving as the visible layer of the next).


For example, if the output of a hidden unit h1 is higher than the prespecified threshold, the corresponding glia cell g1 will be activated and a signal transmitted to glia cell g2. When the signal is passed to g2, g2 will be activated regardless of whether the output of hidden unit h2 has reached the prespecified threshold. Then, glia cell g2 will produce a second signal to propagate. Meanwhile, the signal generated by g1 continues to spread. In order to simplify the calculation, all signals produced by glia cells are propagated in a fixed direction along the glia chain; that is, signals are transmitted from the first glia cell on the chain to the last.

For an RBM with a glia chain, the output rule of the hidden units is updated as follows:

h_j = σ(h_j* + α · g_j),  (1)

where h_j* is the output value of hidden node j before the output rule is updated, g_j is the glia effect value of the corresponding glia cell, α is the weight coefficient of the glia effect value, and σ(·) is the sigmoid function. The weight coefficient α is set manually and controls the influence of the glia effect on the hidden units. h_j* is calculated as

h_j* = Σ_i W_ij v_i + c_j,  (2)

where W_ij is the connection weight between visible unit i and hidden unit j, v_i is the state value of visible unit i, and c_j is the bias of hidden unit j. Instead of random sampling, the activation probability is employed as the output of each hidden unit, which reduces sampling noise and speeds up learning.

The glia effect value of glia cell g_j is defined as

g_j(t) = 1, if (h_j* > θ ∪ g_{j−1}(t − 1) = 1) ∩ (t_j < T);
g_j(t) = β · g_j(t − 1), otherwise,  (3)

where θ is the prespecified threshold, T is an unresponsive time threshold after activation, and β represents the attenuation factor. Each time, the signal produced by an activated glia cell is passed to the next glia cell. The activation of a glia cell depends on whether the output of the corresponding hidden unit reaches the prespecified threshold θ or whether the previous glia cell conveys a signal to it. Meanwhile, the difference t_j between its last activation time and the current time must be less than T. If the glia cell is activated, it transmits a signal to the next glia cell; otherwise, it produces no signal and its glia effect gradually decays.
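The chain update of Eq. (3) combined with the output rule of Eq. (1) can be sketched in Python as follows; the parameter values and the way the chain is scanned once per step are illustrative choices, not the paper's settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glia_chain_step(h_pre, g_prev, t_since, theta=0.5, T=4, beta=0.8, alpha=0.25):
    """One update of the glia chain (Eq. (3)) and the hidden outputs (Eq. (1)).

    h_pre   : pre-activations h_j* of the hidden units (Eq. (2))
    g_prev  : glia effect values from the previous step
    t_since : steps since each glia cell last fired
    """
    m = len(h_pre)
    g = np.empty(m)
    prev_fired = False                 # signals travel from the first cell to the last
    for j in range(m):
        fired = (h_pre[j] > theta or prev_fired) and t_since[j] < T
        g[j] = 1.0 if fired else beta * g_prev[j]   # decay when inactive
        prev_fired = fired
    h = sigmoid(h_pre + alpha * g)     # Eq. (1)
    return h, g

h_pre = np.array([0.9, 0.1, 0.2])      # only unit 1 exceeds theta
h, g = glia_chain_step(h_pre, g_prev=np.zeros(3), t_since=np.zeros(3))
print(g)                                # [1. 1. 1.]: the activation propagates along the chain
```

Note how a single over-threshold hidden unit activates its glia cell and, through the chain, every cell downstream of it, which is exactly the propagation behavior described above.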

After integrating the glia cell mechanism, the learning algorithm of the RBM is improved; the pseudocode of the learning algorithm is listed in Algorithm 1.

The training process of a DBN-GC, which is similar to that of a DBN, consists of two steps: pretraining and fine-tuning. The glia cell mechanism acts only on the pretraining process. In the pretraining phase, a greedy layer-wise unsupervised method is adopted to train each RBM, and the hidden layer's output of the previous RBM is used as the visible layer's input of the next RBM. In the fine-tuning phase, backpropagation is performed to fine-tune the parameters of the DBN-GC.

2.3.2. DBN-GC-Based Ensemble Deep Learning Model. Considering that the raw EEG features in Table 1 may have different hidden properties across the time domain and frequency domain modalities, we propose a DBN-GC-based ensemble deep learning model that applies a DBN-GC-based network to each homogeneous feature subset independently. The feature vectors derived from the different feature subsets are fed into the corresponding DBN-GCs, respectively, and the higher feature abstractions of each feature subset are obtained as the outputs of the last hidden layer of the corresponding DBN-GC. Then, a discriminative RBM is built upon the combined higher feature abstractions. The network architecture is illustrated in Figure 5. The ensemble deep learning model consists of three parts: the input layer, five parallel DBN-GCs, and a discriminative RBM.

The overall raw EEG feature set in Table 1 can be defined as F0, which is split into five nonoverlapping physiological feature subsets F1, F2, F3, F4, and F5. The statistical measures from the time domain construct subset F1, and the multichannel EEG PSDs construct subset F2. Another subset, F3, is built from the EEG power differences. In view of the heterogeneity of the multichannel HHS features in the time and frequency domains, the HHS features are grouped into two subsets, F4 and F5, representing squared amplitude and instantaneous frequency, respectively.
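Using the index ranges from Table 1, the split of the 664-dimensional vector into the five subsets can be sketched as follows; the even split of the 320 HHS features into an amplitude half (F4) and an instantaneous-frequency half (F5) is an assumption for illustration:

```python
import numpy as np

# 0-based, end-exclusive index ranges derived from Table 1
SUBSETS = {
    "F1": (0, 128),     # statistical measures (time domain)
    "F2": (128, 288),   # PSD power features (frequency domain)
    "F3": (288, 344),   # power differences over 14 channel pairs
    "F4": (344, 504),   # HHS squared amplitude (assumed first half)
    "F5": (504, 664),   # HHS instantaneous frequency (assumed second half)
}

def split_features(x):
    """Split one 664-dim raw feature vector x(F0) into the five subset vectors."""
    return {name: x[lo:hi] for name, (lo, hi) in SUBSETS.items()}

x = np.arange(664, dtype=float)
parts = split_features(x)
print([len(v) for v in parts.values()])   # [128, 160, 56, 160, 160]
```

The subset sizes sum to 128 + 160 + 56 + 160 + 160 = 664, matching the feature dimension stated in Section 2.2.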

Figure 3: Structure of deep belief network with glia chains.

Figure 4: Structure of RBM with glia chain.

Define the feature vector formed by feature subset Fi as x(Fi). The features in F0 form the input vector x(F0) of the ensemble deep learning framework, which is fed into the input layer. Then, the input vector x(F0) is split into five subvectors: x(F1), x(F2), x(F3), x(F4), and x(F5). The five subvectors are the inputs to the corresponding DBN-GCs, respectively. The five DBN-GC-based deep models are built for learning the hidden feature abstractions of each raw EEG feature subset, and the hidden feature abstractions are described as s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)), where si(x(Fi)) is the output vector of the last hidden layer of the corresponding DBN-GC i. Then, s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)) are merged into a single vector, which is fed into the discriminative RBM to recognize emotion states.

When building the DBN-GC-based ensemble deep learning model, the five DBN-GCs are trained first. An additional two-neuron output layer corresponding to the binary emotions is added when training each DBN-GC. Then, the discriminative RBM is built upon the combined higher feature abstractions derived from the five DBN-GCs. To determine the DBN-GCs' hyperparameters, different combinations of hyperparameters are tested, and the combination with the minimal recognition error is adopted.

3. Results and Discussion

In view of the limited sample size of the dataset, cross-validation is adopted in the experiments. The ensemble deep learning model is trained and tested via a 10-fold cross-validation technique in a participant-specific style. For each of the 32 volunteers, the corresponding 40 instances are divided into 10 subsets: 9 subsets (36 instances) are assigned to the training set, and the remaining subset (4 instances) is assigned to the test set. This process is repeated 10 times until all subsets have been tested.
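The participant-specific 10-fold protocol can be sketched as follows (the shuffling seed is illustrative):

```python
import numpy as np

def participant_folds(n_instances=40, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) splits for one volunteer's 40 instances."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_instances)
    folds = np.array_split(idx, n_folds)      # 10 folds of 4 instances each
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[i] for i in range(n_folds) if i != k])
        yield train, test

sizes = [(len(tr), len(te)) for tr, te in participant_folds()]
print(sizes[0])                                # (36, 4)
```

Each volunteer's model is trained and evaluated on that volunteer's own trials only; the per-fold accuracies are then averaged over folds and volunteers.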

3.1. Comparison between DBN and DBN-GC. In order to study the learning performance of the DBN-GC, we first use three feature subsets (F6, F7, and F8) to train DBNs and DBN-GCs, respectively. The three subsets are given as follows: F6 = F1, F7 = F2 ∪ F3, and F8 = F4 ∪ F5. The three feature subsets represent time domain, frequency domain, and time-frequency characteristics, respectively. A DBN and a DBN-GC are trained on the same feature subset, and they have the same hyperparameters, as shown in Table 2. In addition, the parameters of the DBN and the DBN-GC that share the same feature subset, such as the learning rate, are all set to the same values. The six models perform the same emotion recognition task, and recognition performance is measured by accuracy and F1-score.

Detailed recognition performance comparisons on the arousal and valence dimensions are illustrated in Figure 6.

Each column represents the statistical results of the 32 participants. Figures 6(a) and 6(c) show the classification accuracy and F1-score for the arousal dimension; Figures 6(b) and 6(d) show the classification accuracy and F1-score for the valence dimension. As can be seen from Figure 6, no matter which feature subset is used, the DBN-GC model greatly outperforms the corresponding baseline DBN model, with a higher median accuracy and F1-score and a relatively low standard deviation. Among the three DBN-GC models, DBN-GC7, which is built from feature subset F7, achieves the highest recognition accuracy (0.7242 on arousal and 0.7310 on valence) and F1-score (0.6631 on arousal and 0.6775 on valence). The results show that the frequency domain characteristics of EEG signals can provide salient information regarding emotion states.

Algorithm 1: Pseudocode for training the RBM with glia chain.

Input: a training sample x1, learning rate ε, glia effect vector g
Output: weight matrix W, visible layer bias vector b, hidden layer bias vector c
Start training
Initialize v(0): v(0) ← x1 (initialize the state values of the visible layer v)
for j = 1, ..., m (for all hidden units)
    p(h_j(0) = 1 | v(0)) = σ(Σ_i W_ij v_i(0) + c_j)
end for
Extract a sample h(0) according to h(0) ~ p(h(0) | v(0))
for i = 1, ..., n (for all visible units)
    p(v_i(1) = 1 | h(0)) = σ(Σ_j W_ij h_j(0) + b_i)
end for
Extract a sample v(1) according to v(1) ~ p(v(1) | h(0))
for j = 1, ..., m (for all hidden units, calculate the output value without glia effect)
    h_j(1)* = Σ_i W_ij v_i(1) + c_j
end for
Update the glia effect vector g
for j = 1, ..., m (for all hidden units, calculate the output value with glia effect)
    h_j(1) = σ(h_j(1)* + α · g_j)
end for
Update parameters:
W ← W + ε (p(h(0) = 1 | v(0)) v(0)^T − p(h(1) = 1 | v(1)) v(1)^T)
b ← b + ε (v(0) − v(1))
c ← c + ε (p(h(0) = 1 | v(0)) − p(h(1) = 1 | v(1)))
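Algorithm 1 amounts to a single contrastive-divergence (CD-1) update in which the negative-phase hidden output receives the glia adjustment of Eq. (1). A minimal sketch, with the glia effect vector g held fixed for brevity (it would be updated by the chain rule of Eq. (3)) and all sizes illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step_with_glia(v0, W, b, c, g, alpha=0.25, eps=0.05, rng=None):
    """One CD-1 update for an RBM with a glia chain (after Algorithm 1)."""
    rng = rng or np.random.default_rng(0)
    p_h0 = sigmoid(v0 @ W + c)                     # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b)                   # reconstruction
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    h1 = sigmoid(v1 @ W + c + alpha * g)           # glia-adjusted hidden output, Eq. (1)
    W += eps * (np.outer(v0, p_h0) - np.outer(v1, h1))
    b += eps * (v0 - v1)
    c += eps * (p_h0 - h1)
    return W, b, c

n_vis, n_hid = 6, 4
rng = np.random.default_rng(1)
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c, g = np.zeros(n_vis), np.zeros(n_hid), np.zeros(n_hid)
v0 = (rng.random(n_vis) < 0.5).astype(float)       # one toy binary training sample
W, b, c = cd1_step_with_glia(v0, W, b, c, g)
print(W.shape)                                     # (6, 4)
```

As in Algorithm 1, the positive-phase statistics use activation probabilities rather than sampled states, which reduces sampling noise.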

These results validate that the glia chain can improve the learning performance of the deep structure. Through the glia chains, the hidden layer units of the DBN-GC can transfer information to each other, so the DBN-GC model can obtain the correlation information between units in the same hidden layer. Thus, the improved DBN model can learn more discriminative features. For the EEG-based emotion recognition task, the DBN-GC can mine interchannel correlation and utilize interchannel information, which is often ignored in other emotion recognition studies.

3.2. Results of the DBN-GC-Based Ensemble Deep Learning Model. The proposed DBN-GC-based ensemble deep learning model is then employed to perform the emotion recognition task. Each parallel DBN-GC in the ensemble model has 3 hidden layers. The numbers of hidden neurons of each parallel DBN-GC are listed in Table 3. Through the five parallel DBN-GCs, the samples' feature dimensionality is reduced from 664 to 350.

Figure 5: Architecture of DBN-GC-based ensemble deep learning model.

Table 2: Hyperparameters of DBNs and DBN-GCs.

Layer | DBN6 (F6) | DBN-GC6 (F6) | DBN7 (F7) | DBN-GC7 (F7) | DBN8 (F8) | DBN-GC8 (F8)
v1    | 128 | 128 | 216 | 216 | 320 | 320
h1    | 100 | 100 | 200 | 200 | 300 | 300
h2    | 80  | 80  | 135 | 135 | 280 | 280
h3    | 45  | 45  | 90  | 90  | 125 | 125
o1    | 2   | 2   | 2   | 2   | 2   | 2
Note: v1 represents the input layer, o1 represents the output layer, and hi represents the i-th hidden layer.

Table 4 compares the average recognition performance of the proposed DBN-GC-based ensemble deep learning model with several deep learning-based studies on the same database. Specifically, Tripathi et al. divided the 8064 readings per channel into 10 batches. For each batch, 9 characteristics, such as the mean and variance, were extracted. Then, a deep neural network and a convolutional neural

network were used for emotion recognition [29]. Li et al. proposed frame-based EEG features through wavelet and scalogram transforms and designed a hybrid deep learning model combining a CNN and an RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, and an SVM with an RBF kernel was then employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, with 3 classifiers built to predict emotion states [26]. Since the above studies did not report the F1-score as a recognition performance metric, the average recognition performance of the proposed model is also compared with that of reference [32]. In that reference, Koelstra et al. analyzed central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Considering that the proposed DBN-GC-based method in this paper is based on the EEG signal, Table 4 lists only the recognition results of the CNS feature-based single modality from reference [32]. The DEAP dataset is used in all the references in Table 4, and in all of them the trials are divided into two classes for valence and arousal, respectively (ratings above 5 versus 5 or less).

As can be seen from Table 4, the recognition accuracy of the DBN-GC-based ensemble deep learning model outperforms most of the above deep classifiers. Meanwhile, the F1-scores achieved by the proposed model are clearly superior to the 0.5830 and 0.5630 reported in reference [32]. The proposed method provides a 0.7683 mean recognition accuracy (MRA) on valence, which is lower than the highest MRA reported on valence (0.8141).

Figure 6: Comparison of emotion recognition results between DBN and DBN-GC. (a) Accuracy (arousal); (b) accuracy (valence); (c) F1-score (arousal); (d) F1-score (valence). Each panel compares DBN6/DBN-GC6, DBN7/DBN-GC7, and DBN8/DBN-GC8.

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

Model    | h1  | h2  | h3
DBN-GC1  | 110 | 105 | 90
DBN-GC2  | 145 | 145 | 120
DBN-GC3  | 55  | 55  | 45
DBN-GC4  | 120 | 55  | 45
DBN-GC5  | 155 | 80  | 50
Sum      | 585 | 440 | 350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                            Arousal               Valence
                                       Accuracy  F1-score   Accuracy  F1-score
Deep neural network [29]                0.7313      –         0.7578      –
Convolutional neural network [29]       0.7336      –         0.8141      –
CNN+RNN [37]                            0.7412      –         0.7206      –
DBN-SVM [38]                            0.6420      –         0.5840      –
Wang and Shang [26]                     0.5120      –         0.6090      –
CNS feature-based single modality [32]  0.6200    0.5830      0.5760    0.5630
DBN-GC-based ensemble deep
learning model                          0.7592    0.6931      0.7683    0.7015

Computational Intelligence and Neuroscience 7

This may be due to the fact that the CNN model proposed by Tripathi et al. benefits from sufficient training and validating instances. In addition, the performance of the ensemble deep learning model also outperforms the three DBN-GCs in Table 3, which are trained by a single-feature subset. This indicates that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary in emotion recognition, and the proposed method can integrate different types of characteristics effectively.

3.1. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of glia cells on DBN and then affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually and take 20 values in the range of 0 to 1, respectively, with the interval 0.05. Under the different values of each parameter, the MRA on valence

Figure 7: MRA of the proposed ensemble deep learning model with different values of glia effect weight. [Accuracy (%) from 70 to 80 versus the weight coefficient of glial effect, 0.05 to 1.00, for arousal and valence.]

Figure 8: MRA of the proposed ensemble deep learning model with different values of attenuation factor. [Accuracy (%) from 70 to 80 versus the attenuation factor, 0.05 to 1.00, for arousal and valence.]


and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7-9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained as the glia effect weight is set to 0.75. For the MRA on valence, the higher values appear when the weight value is close to 0.15 or 0.80. Taking into account these two indicators simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the value of the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. With the attenuation factor increased to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest value of MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. For the MRA on valence, its value rises slowly once the glia threshold exceeds 0.25. It is appropriate that the glia threshold is within the range of 0.70 to 0.80.
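The manual sweep described above (20 candidate values per parameter, from 0.05 to 1.00 in steps of 0.05) can be generated in one line; the variable name is illustrative, not the paper's:

```python
# 20 candidate values per parameter: 0.05, 0.10, ..., 1.00
grid = [round(0.05 * k, 2) for k in range(1, 21)]
assert len(grid) == 20 and grid[0] == 0.05 and grid[-1] == 1.0
```

Each of α, β, and θ is swept over this grid while the other two are held fixed.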

4. Conclusions

In this paper, we presented an ensemble deep learning model which integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information from multichannel EEG signals, which is often neglected, contains salient information regarding

emotion states, and the chain structure of glia cells in DBN-GC has the ability to mine interchannel correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and the ensemble deep learning framework-based fusion methods is validated by the experiments based on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset will have to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail. Then a username and password to download the data will be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

Figure 9: MRA of the proposed ensemble deep learning model with different values of glia threshold. [Accuracy (%) from 70 to 80 versus the glia threshold, 0.05 to 1.00, for arousal and valence.]


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220-232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440-1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145-156, 2018.

[4] C.-N. Anaqnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155-177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of 2015 ACM International Conference on Multimodal Interaction, pp. 467-474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94-106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230-2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101-120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390-396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1-6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589-597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306-310, 1970.

[15] K. Ansari-asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of 15th European Signal Processing Conference, pp. 1241-1245, Springer, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457-466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186-197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81-97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162-172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498-3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241-247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21-25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138-144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of International Conference on Computer Vision and Image Processing, pp. 335-343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39-50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505-511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1-6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1-27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746-4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of 2012 International Joint Conference on Neural Networks, pp. 1-6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943-952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149-164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35-41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103-111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93-110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of IEEE International Conference on Bioinformatics and Biomedicine, pp. 352-359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.



The EEG power spectral density (PSD) in the alpha (8-13 Hz) band was reported to be significantly correlated with the level of valence [19]. The EEG PSDs in the delta (1-4 Hz) and theta (4-7 Hz) bands extracted from three central channels also contain salient information related to both arousal and valence levels [13]. Based on time-frequency analysis, the Hilbert-Huang spectrum (HHS) [20] and discrete wavelet transform method [21, 22] were proposed for emotion classification tasks. The above research shows that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals can each provide salient information related to emotional states.

Usually, machine learning methods are used to establish emotion recognition models. Samara et al. fused statistical measurements, band power from the β, δ, and θ waves, and high-order crossings of the EEG signal by employing a support vector machine (SVM) as the classifier [23]. Jadhav et al. proposed a novel technique for EEG-based emotion recognition using gray-level co-occurrence matrix- (GLCM-) based features and a k-nearest neighbors (KNN) classifier [24]. Thammasan et al. applied three commonly used algorithms to classify emotional classes: a support vector machine based on the Pearson VII kernel function (PUK) kernel, a multilayer perceptron (MLP) with one hidden layer, and C4.5 [25]. Recently, various deep learning (DL) approaches were investigated for EEG-based emotion classification. Standard deep belief networks (DBNs) were employed by Wang and Shang to extract features from raw physiological data to recognize the levels of arousal, valence, and liking [26]. In reference [27], two types of deep learning approaches, stacked denoising autoencoders and deep belief networks, were applied as feature extractors for the affective states classification problem using EEG signals. Li et al. designed a hybrid deep learning model that combines the convolutional neural network (CNN) and recurrent neural network (RNN) to extract EEG features [28].

Compared with the traditional machine learning methods, DL has achieved promising results. However, there still exist two challenges in multichannel EEG signal-based emotion recognition. Firstly, seeing that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals contain salient information related to emotional states, the complementarity between the different types of features derived from these domain characteristics is naturally considered. Thus, feature extraction and feature fusion of multichannel EEG signals in the time domain, frequency domain, and time-frequency domain need to be investigated to achieve better performance. Generally, a simple deep model such as DBN or CNN can abstract the intermediate representations of multichannel EEG features and achieve feature fusion at the feature level [29]. Nevertheless, in view of the high dimensionality and limited training samples of the physiological data, too many nodes in each layer of the deep network will lead to the model overfitting problem. Secondly, capturing the correlation information between different channels of the EEG signal and extracting depth

correlation features, which are ignored by researchers, need to be taken into consideration when performing feature fusion using the deep model.

To address the two issues mentioned above, an integrated deep learning framework composed of DBN-GCs is proposed in this paper. As a special nerve cell in the human brain, a glia cell can transmit signals to neurons and other glia cells. Therefore, researchers paid attention to the characteristics of glia cells and applied them to artificial neural networks [30, 31]. In the framework, raw multidomain features are obtained from multichannel EEG signals. Then the intermediate representations of the raw multidomain features are separately extracted by member DBN-GCs, in which glia chains work for mining interchannel correlation and help to optimize the learning process. Finally, a discriminative RBM is used to obtain the emotion predictions. In the experiment, the effectiveness of our method is validated on the multichannel EEG data in the DEAP dataset, which is widely used for emotion recognition.

The rest of this paper is organized as follows. A detailed description of the proposed deep learning framework based on DBN-GC is presented in Section 2. The experimental results and discussions are reported in Section 3. Section 4 briefly concludes the work.

2. Methods

2.1. Database. In this research, the DEAP dataset is used for emotion analysis. DEAP is an open source dataset developed by the research team at Queen Mary University of London [32]. It recorded the multimodal physiological signals produced by 32 volunteers under the stimulus of selected videos. The multimodal physiological signals include the EEG and peripheral physiological signals. Each volunteer needed to watch 40 one-minute-long videos. While each video was presented, the EEG and peripheral physiological signals of the volunteers were recorded synchronously. It should be noted that the EEG was recorded from 32 sites (Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, and O2). Finally, the subjective ratings of arousal, valence, liking, and dominance on a scale of 1-9 were provided. In this study, we only focus on the arousal and valence scales. Thus, a two-dimensional emotional model (illustrated in Figure 1) can be built, where the two dimensions are arousal and valence, respectively. We divide and label the trials into two classes for valence and arousal, respectively (pleasant > 5, unpleasant <= 5; aroused > 5, relaxed <= 5).
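As a minimal illustration of this labeling rule (the function name is ours, not the paper's):

```python
def binarize_rating(rating):
    """Map a 1-9 self-assessment rating to 1 (high, > 5) or 0 (low, <= 5)."""
    return 1 if rating > 5 else 0

# Example: a trial rated valence = 6.5, arousal = 3.0 is pleasant but relaxed.
labels = {"valence": binarize_rating(6.5), "arousal": binarize_rating(3.0)}
```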

2.2. Data Preprocessing and Feature Extraction. In this study, only EEG signals are employed for the emotion recognition. EEG signals recorded with a 512 Hz sampling frequency are downsampled to 128 Hz. Then filtering is implemented by a band-pass filter with cutoff frequencies of 4.0 and 45.0 Hz.
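A rough sketch of this preprocessing step, using an ideal FFT band-pass and naive decimation in place of the paper's unspecified filter design (and swapping the order for simplicity; a production pipeline would use a proper anti-aliasing filter, e.g. `scipy.signal.decimate`):

```python
import numpy as np

def bandpass_fft(signal, fs, lo, hi):
    """Crude ideal band-pass: zero out spectral bins outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

fs_raw, fs_target = 512, 128
t = np.arange(fs_raw) / fs_raw                       # 1 s of synthetic "EEG"
x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 10 * t)
x_filt = bandpass_fft(x, fs_raw, 4.0, 45.0)          # keep the 4.0-45.0 Hz band
x_down = x_filt[:: fs_raw // fs_target]              # naive 512 Hz -> 128 Hz
```

The 2 Hz component falls below the pass band and is removed, while the 10 Hz component survives.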

In order to make full use of the salient information regarding the emotional states in EEG signals, four types of raw features, which characterize the information in the time


domain, frequency domain, and time-frequency domain, respectively, are extracted from the EEG signals [32-36], and the detailed description is shown in Table 1. The 14 EEG channel pairs for computing power differences include Fp2-Fp1, AF4-AF3, F4-F3, F8-F7, FC6-FC5, FC2-FC1, C4-C3, T8-T7, CP6-CP5, CP2-CP1, P4-P3, P8-P7, PO4-PO3, and O2-O1. Thus, the dimension of the feature vector for one instance is 664, and the label for each instance is 2-dimensional.
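The 664-dimensional count can be checked directly against the row totals of Table 1:

```python
# Sanity check of the 664-dimensional feature vector against Table 1.
stats = 4 * 32      # statistical measures: 4 features x 32 channels = 128
psd   = 5 * 32      # band powers: 5 bands x 32 channels = 160
diffs = 4 * 14      # power differences: 4 bands x 14 channel pairs = 56
hhs   = 2 * 5 * 32  # HHS features: 2 features x 5 bands x 32 channels = 320
assert stats + psd + diffs + hhs == 664
```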

For one volunteer, the size of the corresponding data matrix is 40 x 664 (videos/instances x features). For all 32 volunteers, 40 x 32 = 1280 instances are available. The corresponding data matrix of each volunteer was standardized to remove the difference in feature scales.
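The per-volunteer standardization can be sketched as a column-wise z-score (the guard for constant columns is our addition; the paper does not specify one):

```python
import numpy as np

def standardize(data):
    """Column-wise z-score of one volunteer's instances-by-features matrix."""
    mu = data.mean(axis=0)
    sd = data.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)   # guard against constant features
    return (data - mu) / sd

rng = np.random.default_rng(0)
volunteer = rng.normal(5.0, 2.0, size=(40, 664))   # synthetic stand-in for Table 1 features
z = standardize(volunteer)
```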

2.3. Ensemble Deep Learning Framework

2.3.1. Improved DBN with Glia Chains. For emotion recognition tasks, deep learning methods hypothesize that a hierarchy of intermediate representations from the EEG raw features is necessary to characterize the underlying salient information related to different emotional states. A deep belief network, which is composed of many restricted Boltzmann machines in a stacking way, has the strong ability to learn high-level representations, benefiting from a deep structure-based learning mechanism with multiple hidden layers.

As shown in Figure 2, the output of the first RBM is used as the input of the second RBM. Similarly, the third RBM is trained on the output of the second RBM. In this way, a deep hierarchical model can be constructed that learns from low-level features to obtain the high-level representation.

In view of the fact that there are no interconnections among the neural units of a DBN in the same layer, it is hard to exploit the mutual information of different neural units in the same layer. This means that a DBN can hardly mine interchannel and interfrequency correlation information from multichannel EEG signals in emotion recognition tasks. Considering this, an improved DBN with glia chains is introduced in this paper.

The structure of DBN-GC can be seen in Figure 3. In addition to the two layers of units of each RBM, there is a group of glia cells, represented by stars and linked into a chain structure. Each glia cell is also connected to a unit in the hidden layer of the RBM, as shown in Figure 4. There is no weight between the glia cells and the corresponding hidden units. The effect of all glia cells in the training process can be directly applied to the hidden units, and the outputs of the hidden layer nodes can be adjusted accordingly. Through the connection of glia cells, each glia cell can also transmit an activated signal to other glia cells and adjust their glia effects.

Table 1: Detailed description of raw EEG features.

Index | Type | Domain | Notations of the extracted features
No. 1-128 | Statistical measures | Time domain | Mean, variance, zero-crossing rate, and approximate entropy of 32 EEG channels (4 features x 32 channels)
No. 129-288 | Power features | Frequency domain | Average PSD in theta (4-8 Hz), slow-alpha (8-10 Hz), alpha (8-12 Hz), beta (12-30 Hz), and gamma (30-45 Hz) bands for all EEG channels (5 powers x 32 channels)
No. 289-344 | Power differences | Frequency domain | Difference of average PSD in theta, alpha, beta, and gamma bands for 14 EEG channel pairs between right and left scalp (4 power differences x 14 channel pairs)
No. 345-664 | HHS features | Time-frequency domain | Average values of squared amplitude and instantaneous frequency of the HHS-based time-frequency representation in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz), and gamma (30-45 Hz) bands for all EEG channels (2 features x 5 bands x 32 channels)

Figure 1: Arousal-valence plane. [Axes: valence (negative to positive) and arousal (low to high); labeled states include peaceful, happiness, sadness, angry, and neutral.]

Figure 2: Structure of deep belief network. [Four stacked RBMs (RBM1-RBM4) with alternating visible and hidden layers.]


For example, if the output of a hidden unit h1 is higher than the prespecified threshold, the corresponding glia cell g1 will be activated, and then a signal is transmitted to the glia cell g2. When the signal is passed to g2, the glia cell g2 will be activated no matter whether or not the output of the hidden unit h2 reached the prespecified threshold. Then glia cell g2 will produce a second signal to spread. Meanwhile, the signal generated by g1 will continue to spread. In order to simplify the calculation, all signals produced by glia cells are propagated along a specific direction of the glia chain. That is, the signals are transmitted from the first glia cell on the chain to the last.

For an RBM with a glia chain, the output rule of the hidden units is updated as follows:

h_j = σ(h*_j + α · g_j),   (1)

where h*_j is the output value of the hidden node j before the output rule is updated, g_j is the glia effect value of the corresponding glia cell, α is the weight coefficient of the glia effect value, and σ(·) is the sigmoid function. The weight coefficient α is set manually and controls the influence of the glia effect on the hidden units. h*_j can be calculated as

h*_j = Σ_i W_ij · v_i + c_j,   (2)

where W_ij is the connection weight between the visible unit i and the hidden unit j, v_i is the state value of the visible unit i, and c_j is the bias value of the hidden unit j. Instead of random sampling, the activation probability is employed as the output for each hidden unit, which can reduce sampling noise and speed up learning.

The glia effect value of glia cell g_j is defined as

g_j(t) = 1,              if (h*_j > θ ∪ g_{j-1}(t-1) = 1) ∩ (t_j < T),
g_j(t) = β · g_j(t-1),   otherwise,   (3)

(3)

where θ is the prespecified threshold, T is an unresponsive time threshold after activation, and β represents the attenuation factor. Every time step, the signal produced by an activated glia cell is passed to the next glia cell. The activation of a glia cell depends on whether the output of the corresponding hidden unit reaches the prespecified threshold θ or whether the previous glia cell conveys a signal to it. Meanwhile, the difference t_j between its last activation time and the current time must be less than T. If the glia cell is activated, it will transmit a signal to the next glia cell; otherwise, it will not produce signals, and its glia effect will gradually decay.
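Equation (3) and the left-to-right chain propagation described above can be sketched in numpy. The parameter defaults here (θ = 0.75, β = 0.45, T = 4) are illustrative, not the paper's, and the in-step signal cascade follows the prose description:

```python
import numpy as np

def glia_update(g_prev, h_pre, t, last_fire, theta=0.75, beta=0.45, T=4):
    """One step of the glia chain, following equation (3) and the prose above.

    g_prev    : glia effect values at time t-1
    h_pre     : pre-activations h*_j of the attached hidden units
    last_fire : last activation time of each glia cell (updated in place)
    """
    g = beta * g_prev                 # default branch: decayed glia effect
    signal = False                    # signal travelling left-to-right along the chain
    for j in range(g.size):
        excited = (h_pre[j] > theta) or signal
        if excited and (t - last_fire[j]) < T:
            g[j] = 1.0                # activated branch of equation (3)
            last_fire[j] = t
            signal = True
        else:
            signal = False
    return g, last_fire
```

Note how a single hidden unit exceeding θ cascades the activation down the rest of the chain, exactly as in the h1/g1/g2 example above.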

After integrating the glia cell mechanism, the learning algorithm of the RBM is improved, and the pseudocode of the learning algorithm is listed in Algorithm 1.
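Algorithm 1 is not reproduced here, but equations (1) and (2) amount to an ordinary RBM hidden-layer pass with an additive glia term; a minimal numpy sketch (weights and α below are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_output(v, W, c, g, alpha=0.8):
    """Equations (1)-(2): glia-adjusted hidden activation probabilities."""
    h_pre = v @ W + c                  # equation (2): h*_j = sum_i W_ij v_i + c_j
    return sigmoid(h_pre + alpha * g)  # equation (1)

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
c = np.zeros(n_hidden)
v = rng.integers(0, 2, size=n_visible).astype(float)
h = hidden_output(v, W, c, g=np.zeros(n_hidden))   # plain RBM pass when g = 0
```

With g = 0 this reduces to the standard RBM hidden-layer rule; a positive glia effect pushes the corresponding activation probabilities upward.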

The training process of a DBN-GC, which is similar to that of a DBN, consists of 2 steps: pretraining and fine-tuning. The glia cell mechanism only acts on the pretraining process. In the pretraining phase, a greedy layer-wise unsupervised method is adopted to train each RBM, and the hidden layer's output of the previous RBM is used as the visible layer's input of the next RBM. In the fine-tuning phase, back propagation is performed to fine-tune the parameters of the DBN-GC.

2.3.2. DBN-GC-Based Ensemble Deep Learning Model. Considering that the raw EEG features in Table 1 may have different hidden properties across the different time domain and frequency domain modalities, we propose a DBN-GC-based ensemble deep learning model, which applies a DBN-GC-based network to each homogeneous feature subset independently. The feature vectors derived from the different feature subsets are fed into the corresponding DBN-GCs, respectively, and the higher feature abstractions of each feature subset are obtained as the outputs of the last hidden layer in the corresponding DBN-GC. Then a discriminative RBM is built upon the combined higher feature abstractions. The network architecture is illustrated in Figure 5. The ensemble deep learning model consists of three parts: the input layer, five parallel DBN-GCs, and a discriminative RBM.

The overall raw EEG feature set in Table 1 can be defined as F0, which is split into five nonoverlapping physiological feature subsets F1, F2, F3, F4, and F5. The statistical measures from the time domain construct subset F1, and the multichannel EEG PSDs construct subset F2. Another subset, F3, is built from the EEG power differences. In view of the heterogeneity of the multichannel HHS features in the time domain and frequency domain, the HHS features can be grouped into two subsets F4 and F5, indicating squared amplitude and instantaneous frequency, respectively.
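Assuming the subsets follow the index order of Table 1 (the even split of the 320 HHS dimensions into F4 and F5 is our assumption, not stated in the paper), the slicing looks like:

```python
import numpy as np

# Assumed index layout, following the row order of Table 1.
SLICES = {
    "F1": slice(0, 128),     # statistical time-domain measures
    "F2": slice(128, 288),   # multichannel band powers
    "F3": slice(288, 344),   # interhemispheric power differences
    "F4": slice(344, 504),   # HHS squared amplitude
    "F5": slice(504, 664),   # HHS instantaneous frequency
}

x_F0 = np.arange(664, dtype=float)                       # one raw feature vector
subvectors = {name: x_F0[s] for name, s in SLICES.items()}
```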

Define the feature vector formed by the feature subset Fi as x(Fi). The features in F0 form the input vector x(F0) of the

Figure 4: Structure of RBM with glia chain. [Visible units v1...vn, hidden units h1...hn, and glia cells g1...gm; each glia cell is attached to a hidden unit and chained to its neighbors.]

Figure 3: Structure of deep belief network with glia chains. [A visible layer followed by stacked RBMs, each with a glia chain on its hidden layer.]


ensemble deep learning framework, which is fed into the input layer. Then the input vector x(F0) is split into five subvectors x(F1), x(F2), x(F3), x(F4), and x(F5). The five subvectors are the inputs to the corresponding DBN-GCs, respectively. The five DBN-GC-based deep models are built for learning the hidden feature abstractions of each raw EEG feature subset, and the hidden feature abstractions are described as s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)). si(x(Fi)) is the output vector of the last hidden layer of the corresponding DBN-GC i. Then s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)) are merged into a vector, which is fed into the discriminative RBM to recognize emotion states.
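A toy sketch of this split-and-merge fusion; `TinyEncoder` stands in for a trained member DBN-GC, and the 32-unit abstraction width is an arbitrary choice, not the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyEncoder:
    """Stand-in for one trained member DBN-GC: maps x(F_i) to s_i(x(F_i))."""
    def __init__(self, n_in, n_out, seed):
        self.W = np.random.default_rng(seed).normal(0.0, 0.1, size=(n_in, n_out))
    def transform(self, x):
        return sigmoid(x @ self.W)    # output of the last hidden layer

sizes = {"F1": 128, "F2": 160, "F3": 56, "F4": 160, "F5": 160}
encoders = {k: TinyEncoder(n, 32, seed=i) for i, (k, n) in enumerate(sizes.items())}

rng = np.random.default_rng(0)
x = {k: rng.normal(size=n) for k, n in sizes.items()}          # five subvectors
merged = np.concatenate([encoders[k].transform(x[k]) for k in sizes])
# `merged` is what the discriminative RBM would consume.
```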

When building the DBN-GC-based ensemble deep learning model, the five DBN-GCs are trained first. An additional two-neuron output layer that corresponds to the binary emotions is added when training each DBN-GC. Then the discriminative RBM is built upon the combined higher feature abstractions derived from the five DBN-GCs. To determine the DBN-GCs' hyperparameters, different combinations of hyperparameters are tested, and the combination with the minimal recognition error is adopted.

3. Results and Discussion

In view of the limited sample size of the dataset, cross-validation is adopted in the experiments. The ensemble deep learning model is trained and tested via 10-fold cross-validation in a participant-specific style. For each of the 32 volunteers, the corresponding 40 instances are divided into 10 subsets: 9 subsets (36 instances) are assigned to the training set, and the remaining subset (4 instances) is assigned to the test set. This process is repeated 10 times until all subsets have been tested.
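The participant-specific protocol above can be sketched as a small fold generator. This is an illustrative reconstruction; the shuffling and the seed are assumptions, not details given in the paper.

```python
import numpy as np

def participant_specific_cv(instances, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) folds for one participant's instances.

    Mirrors the protocol described above: for 40 instances, 10 folds
    with 36 training and 4 test instances each, every instance tested
    exactly once across the folds.
    """
    rng = np.random.default_rng(seed)       # assumed shuffling, fixed seed
    idx = rng.permutation(len(instances))
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train_idx, test_idx
```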

3.1. Comparison between DBN and DBN-GC. In order to study the learning performance of DBN-GC, we first use three feature subsets (F6, F7, and F8) to train DBNs and DBN-GCs, respectively. The three subsets are given as follows: F6 = F1, F7 = F2 ∪ F3, and F8 = F4 ∪ F5. The three feature subsets represent time domain, frequency domain, and time-frequency characteristics, respectively. A DBN and a DBN-GC are trained on the same feature subset, and they have the same hyperparameters, as shown in Table 2. In addition, the parameters of the DBN and the DBN-GC that share the same feature subset, such as the learning rate, are all set to the same values. The six models perform the same emotion recognition task, and recognition performance is measured by accuracy and F1-score.

The detailed recognition performance comparisons on the arousal and valence dimensions are illustrated in Figure 6.

Each column represents the statistical results of the 32 participants. Figures 6(a) and 6(c) show the classification accuracy and F1-score for the arousal dimension; Figures 6(b) and 6(d) show the classification accuracy and F1-score for the valence dimension. As can be seen from Figure 6, no matter which feature subset is used, the DBN-GC model greatly outperforms the corresponding baseline DBN model, with a higher median accuracy and F1-score and a relatively low standard deviation. Among the three DBN-GC models, DBN-GC7, which is built on feature subset F7, achieves the highest recognition accuracy (0.7242 on arousal and 0.7310 on valence) and F1-score (0.6631 on arousal and 0.6775 on valence). The results show that the frequency domain

Input: a training sample x1, learning rate ε, glia effect vector g
Output: weight matrix W, bias vector of the visible layer b, bias vector of the hidden layer c

Start training
  Initialize v(0): v(0) ← x1 (initialize the state value of the visible layer v)
  for j = 1, ..., m (for all hidden units)
    p(h_j(0) = 1 | v(0)) = σ(Σ_i W_ij v_i(0) + c_j)
  end for
  Extract a sample h(0) according to h(0) ~ p(h(0) | v(0))
  for i = 1, ..., n (for all visible units)
    p(v_i(1) = 1 | h(0)) = σ(Σ_j W_ij h_j(0) + b_i)
  end for
  Extract a sample v(1) according to v(1) ~ p(v(1) | h(0))
  for j = 1, ..., m (for all hidden units, calculate the output value without glia effect)
    h_j(1)* = Σ_i W_ij v_i(1) + c_j
  end for
  Update the glia effect vector g
  for j = 1, ..., m (for all hidden units, calculate the output value with glia effect)
    h_j(1) = σ(h_j(1)* + α · g_j)
  end for
  Update parameters:
    W ← W + ε (p(h(0) = 1 | v(0)) v(0)ᵀ − p(h(1) = 1 | v(1)) v(1)ᵀ)
    b ← b + ε (v(0) − v(1))
    c ← c + ε (p(h(0) = 1 | v(0)) − p(h(1) = 1 | v(1)))

Algorithm 1: Pseudocode for training the RBM with glia chain.
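Assuming binary stochastic units and the glia update rules of Eqs. (1)-(3), one step of Algorithm 1 might look like the following NumPy sketch. The default values of eps, alpha, beta, theta, and T here are illustrative only; the paper selects α, β, and θ by the manual sweep described in Section 3.3.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_gc_cd1(W, b, c, g, last_fire, t, x1,
               eps=0.1, alpha=0.25, beta=0.8, theta=0.5, T=4, rng=None):
    """One CD-1 training step for an RBM whose hidden units carry a glia chain.

    W: (n_visible, n_hidden) weights; b, c: visible/hidden biases;
    g: glia effect values; last_fire: step at which each glia cell last
    fired; t: current step. alpha, beta, theta, T follow Eqs. (1)-(3).
    """
    rng = np.random.default_rng() if rng is None else rng
    v0 = x1.astype(float)
    ph0 = sigmoid(v0 @ W + c)                  # p(h(0)=1 | v(0))
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                # p(v(1)=1 | h(0))
    v1 = (rng.random(pv1.shape) < pv1).astype(float)

    h1_star = v1 @ W + c                       # hidden output without glia effect
    prev_signal = False                        # signals travel chain head -> tail
    for j in range(g.size):
        if (h1_star[j] > theta or prev_signal) and (t - last_fire[j]) < T:
            g[j] = 1.0                         # glia cell fires, Eq. (3)
            last_fire[j] = t
            prev_signal = True
        else:
            g[j] *= beta                       # glia effect decays
            prev_signal = False
    h1 = sigmoid(h1_star + alpha * g)          # hidden output with glia effect, Eq. (1)

    W += eps * (np.outer(v0, ph0) - np.outer(v1, sigmoid(v1 @ W + c)))
    b += eps * (v0 - v1)
    c += eps * (ph0 - sigmoid(v1 @ W + c))
    return h1
```

Note that, as in Algorithm 1, the parameter update uses activation probabilities rather than samples, while the returned h1 (with glia effect) is what would be passed upward as the next RBM's input during layer-wise pretraining.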


characteristics of EEG signals can provide salient information regarding emotion states.

The results validate that the glia chain can improve the learning performance of the deep structure. Through the glia chains, the hidden-layer units in DBN-GC can transfer information to each other, so the DBN-GC model can obtain the correlation information between units in the same hidden layer. Thus, the improved DBN model can learn more discriminative features. For the EEG-based emotion recognition task, the DBN-GC can mine interchannel correlations and utilize interchannel information, which is often ignored by other emotion recognition studies.

3.2. Results of the DBN-GC-Based Ensemble Deep Learning Model. The proposed DBN-GC-based ensemble deep learning model is then employed to perform the emotion recognition task. Each parallel DBN-GC in the ensemble deep learning model has 3 hidden layers, and the numbers of hidden neurons of each parallel DBN-GC are listed in Table 3. Through the five parallel DBN-GCs, the samples' feature dimensionality is reduced from 664 to 350.

Table 4 compares the average recognition performance between the proposed DBN-GC-based ensemble deep learning model and several deep learning-based studies on the same database. Specifically, Tripathi et al. divided the

Figure 5: Architecture of the DBN-GC-based ensemble deep learning model (the input layer receives x(F0), which is split into x(F1)-x(F5); five parallel DBN-GCs produce s1(x(F1))-s5(x(F5)), which feed a discriminative RBM with a label layer).

Table 2: Hyperparameters of DBNs and DBN-GCs.

         F6               F7               F8
         DBN6   DBN-GC6   DBN7   DBN-GC7   DBN8   DBN-GC8
v1       128    128       216    216       320    320
h1       100    100       200    200       300    300
h2       80     80        135    135       280    280
h3       45     45        90     90        125    125
o1       2      2         2      2         2      2

Note: v1 represents the input layer, o1 represents the output layer, and hi represents the i-th hidden layer.


8064 readings per channel into 10 batches. For each batch, 9 characteristics, such as mean and variance, were extracted; then a deep neural network and a convolutional neural network were used for emotion recognition [29]. Li et al. proposed frame-based EEG features through wavelet and scalogram transforms and designed a hybrid deep learning model that combined CNN and RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, and then an SVM with RBF kernel was employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, with 3 classifiers built to predict emotion states [26]. In view of the fact that the above studies did not use the F1-score as a metric for recognition performance, the average recognition performance of the proposed model is also compared with that of reference [32]. In this reference, Koelstra et al. analyzed central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Considering that the proposed DBN-GC-based method in this paper is based on the EEG signal, Table 4 only lists the recognition results of the CNS feature-based single modality in reference [32]. The DEAP dataset is used in all references in Table 4, and in all of them the trials are divided into two classes for valence and arousal, respectively (ratings greater than 5 versus less than 5).

As can be seen from Table 4, the recognition accuracy of the DBN-GC-based ensemble deep learning model outperforms most of the above deep classifiers. Meanwhile, the F1-scores achieved by the proposed model are clearly superior to the 0.5830 and 0.5630 reported in reference [32]. The proposed method provides a 0.7683 mean recognition accuracy (MRA) on valence, which is lower than the highest MRA reported on valence (0.8141).

Figure 6: Comparison of emotion recognition results between DBN and DBN-GC. (a) Accuracy (arousal); (b) accuracy (valence); (c) F1-score (arousal); (d) F1-score (valence). Each panel compares DBN6, DBN-GC6, DBN7, DBN-GC7, DBN8, and DBN-GC8.

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

            h1     h2     h3
DBN-GC1     110    105    90
DBN-GC2     145    145    120
DBN-GC3     55     55     45
DBN-GC4     120    55     45
DBN-GC5     155    80     50
Sum         585    440    350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                            Arousal               Valence
                                            Accuracy   F1-score   Accuracy   F1-score
Deep neural network [29]                    0.7313     —          0.7578     —
Convolutional neural network [29]           0.7336     —          0.8141     —
CNN+RNN [37]                                0.7412     —          0.7206     —
DBN-SVM [38]                                0.6420     —          0.5840     —
Wang and Shang [26]                         0.5120     —          0.6090     —
CNS feature-based single modality [32]      0.6200     0.5830     0.5760     0.5630
DBN-GC-based ensemble deep learning model   0.7592     0.6931     0.7683     0.7015


This may be because the CNN model proposed by Tripathi et al. benefits from sufficient training and validation instances. In addition, the ensemble deep learning model also outperforms the three DBN-GCs in Table 2, which are each trained on a single feature subset. This indicates that the time domain, frequency domain, and time-frequency characteristics of EEG signals should be complementary in emotion recognition and that the proposed method can integrate the different types of characteristics effectively.

3.3. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of the glia cells on the DBN and thus affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually, each taking 20 values in the range of 0 to 1 with an interval of 0.05. Under the different values of each parameter, the MRA on valence

Figure 7: MRA of the proposed ensemble deep learning model with different values of the glia effect weight (accuracy in %, arousal and valence).

Figure 8: MRA of the proposed ensemble deep learning model with different values of the attenuation factor (accuracy in %, arousal and valence).


and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7-9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, both the MRA on arousal and the MRA on valence fluctuate continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, higher values appear when the weight is close to 0.15 or 0.80. Taking both indicators into account, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the attenuation factor is between 0.05 and 0.35, both the MRA on arousal and the MRA on valence fluctuate greatly. As the attenuation factor increases to 0.4, both MRAs increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus, it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. For the MRA on valence, its value rises slowly once the glia threshold exceeds 0.25. It is appropriate to set the glia threshold within the range of 0.70 to 0.80.
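The manual sweep described above amounts to a one-dimensional grid search per parameter. A sketch follows, in which `evaluate` is a hypothetical placeholder for retraining and cross-validating the ensemble model at a given parameter value (the paper balances the two MRAs informally; averaging them here is an assumption):

```python
import numpy as np

# Candidate values for each glia parameter: 20 values from 0.05 to 1.00
# in steps of 0.05, as described above.
CANDIDATES = np.round(np.arange(0.05, 1.0001, 0.05), 2)

def select_parameter(evaluate, candidates=CANDIDATES):
    """Pick the candidate value maximizing mean recognition accuracy.

    `evaluate` maps a parameter value to (MRA_arousal, MRA_valence);
    the two MRAs are averaged into a single score per candidate.
    """
    scores = [sum(evaluate(v)) / 2.0 for v in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```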

4. Conclusions

In this paper, we presented an ensemble deep learning model that integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information in multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of glia cells in DBN-GC is able to mine this interchannel correlation information. In addition, the time domain, frequency domain, and time-frequency characteristics of EEG signals should be complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and of the ensemble deep learning framework-based fusion method is validated by experiments on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset will have to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail; then a username and password to download the data will be provided. The dataset was first presented in reference [32]: DEAP, a database for emotion analysis using physiological signals.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

Figure 9: MRA of the proposed ensemble deep learning model with different values of the glia threshold (accuracy in %, arousal and valence).


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (no. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220-232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440-1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145-156, 2018.

[4] C.-N. Anaqnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155-177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of 2015 ACM International Conference on Multimodal Interaction, pp. 467-474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94-106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230-2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101-120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390-396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1-6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589-597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306-310, 1970.

[15] K. Ansari-asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of 15th European Signal Processing Conference, pp. 1241-1245, Springer, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457-466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186-197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81-97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162-172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498-3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241-247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21-25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138-144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of International Conference on Computer Vision and Image Processing, pp. 335-343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39-50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505-511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1-6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1-27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746-4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of 2012 International Joint Conference on Neural Networks, pp. 1-6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943-952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149-164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35-41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103-111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93-110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of IEEE International Conference on Bioinformatics and Biomedicine, pp. 352-359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.



e overall raw EEG feature set in Table 1 can be denedas F0 which is split into ve nonoverlapped physiologicalfeature subsets F1 F2 F3 F4 and F5 e statistical measuresfrom time domain construct subset F1 and the multichannelEEG PSDs construct subset F2 Another subset F3 is built byEEG power dishyerences In view of the heterogeneity ofmultichannel HHS features in time domain and frequencydomain the HHS features can be grouped into two subsetsF4 and F5 indicating squared amplitude and instantaneousfrequency respectively

Dene feature vectors formed by the feature subset Fi asx(Fi) e features in F0 form the input vector x(F0) of the

[Figure 4: Structure of RBM with glia chain — visible layer units v1 … vn, hidden layer units h1 … hn, and glia cells g1 … gm attached to the hidden units along the chain.]

[Figure 3: Structure of deep belief network with glia chains — a stack of RBMs with glia chains forming the hidden layers above the visible layer.]

4 Computational Intelligence and Neuroscience

ensemble deep learning framework, which is fed into the input layer. Then the input vector x(F0) is split into five subvectors x(F1), x(F2), x(F3), x(F4), and x(F5). The five subvectors are the inputs to the corresponding DBN-GCs, respectively. The five DBN-GC-based deep models are built for learning the hidden feature abstractions of each raw EEG feature subset, and the hidden feature abstractions are described as s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)), where si(x(Fi)) is the output vector of the last hidden layer of the corresponding DBN-GC i. Then s1(x(F1)), s2(x(F2)), s3(x(F3)), s4(x(F4)), and s5(x(F5)) are merged into a single vector, which is fed into the discriminative RBM to recognize emotion states.
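The forward pass of this fusion pipeline might be sketched as follows; the names `ensemble_forward`, `dbn_gcs`, and `discriminative_rbm` are illustrative, not from the paper, and each model is assumed to be exposed as a simple callable.

```python
import numpy as np

def ensemble_forward(x_f0, subset_slices, dbn_gcs, discriminative_rbm):
    """Sketch of the ensemble forward pass: split the raw feature vector
    x(F0) into the subvectors x(F1)..x(F5), push each through its DBN-GC
    to get s_i(x(F_i)), concatenate the abstractions, and classify with
    the discriminative RBM."""
    abstractions = []
    for sl, dbn in zip(subset_slices, dbn_gcs):
        abstractions.append(dbn(x_f0[sl]))    # s_i(x(F_i))
    fused = np.concatenate(abstractions)      # merged higher-level features
    return discriminative_rbm(fused)          # emotion state prediction
```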

When building the DBN-GC-based ensemble deep learning model, the five DBN-GCs are trained first. An additional two-neuron output layer corresponding to the binary emotion labels is added when training each DBN-GC. Then the discriminative RBM is built upon the combined higher-level feature abstractions derived from the five DBN-GCs. To determine the DBN-GCs' hyperparameters, different combinations of hyperparameters are tested, and the combination with the minimal recognition error is adopted.
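The exhaustive hyperparameter selection described above can be illustrated with a small grid-search helper. The names here are hypothetical; `eval_error` stands for training a DBN-GC with the candidate settings and measuring its recognition error.

```python
from itertools import product

def select_hyperparams(grid, eval_error):
    """Try every combination of candidate hyperparameter values and keep
    the one with the minimal recognition error, as done per DBN-GC.
    `grid` maps parameter names to lists of candidate values."""
    names = sorted(grid)
    best, best_err = None, float("inf")
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        err = eval_error(params)          # train + evaluate (assumed)
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```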

3. Results and Discussion

In view of the limited sample size in the dataset, cross-validation is adopted in the experiments. The ensemble deep learning model is trained and tested via 10-fold cross-validation in a participant-specific style. For each of the 32 volunteers, the corresponding 40 instances are divided into 10 subsets: 9 subsets (36 instances) are assigned to the training set, and the remaining 1 (4 instances) is assigned to the test set. The above process is repeated 10 times until all subsets have been tested.
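A minimal sketch of this participant-specific 10-fold split, assuming the 40 trials of one volunteer as in DEAP, could look like:

```python
import numpy as np

def participant_cv_splits(n_instances=40, n_folds=10, seed=0):
    """Participant-specific 10-fold CV: one volunteer's 40 trials are
    shuffled into 10 subsets of 4; each subset serves once as the
    4-instance test set while the other 36 form the training set."""
    idx = np.random.default_rng(seed).permutation(n_instances)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, test
```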

3.1. Comparison between DBN and DBN-GC. In order to study the learning performance of the DBN-GC, we first use three feature subsets (F6, F7, and F8) to train DBNs and DBN-GCs, respectively. The three subsets are given as follows: F6 = F1, F7 = F2 ∪ F3, and F8 = F4 ∪ F5. The three feature subsets represent time domain characteristics, frequency domain characteristics, and time-frequency characteristics, respectively. A DBN and a DBN-GC are trained on the same feature subset, and they have the same hyperparameters, as shown in Table 2. In addition, for the DBN and the DBN-GC sharing the same feature subset, training parameters such as the learning rate are set to the same values. The six models perform the same emotion recognition task, and accuracy and F1-score are adopted as the recognition performance metrics.

The detailed recognition performance comparisons on the arousal and valence dimensions are illustrated in Figure 6.

Each column represents the statistical results of the 32 participants. Figures 6(a) and 6(c) show the classification accuracy and F1-score on the arousal dimension; Figures 6(b) and 6(d) show the classification accuracy and F1-score on the valence dimension. As we can see from Figure 6, no matter which feature subset is used, the DBN-GC model clearly outperforms the corresponding baseline DBN model, with a higher median accuracy and F1-score and a relatively low standard deviation. Among the three DBN-GC models, DBN-GC7, which is built on the feature subset F7, achieves the highest recognition accuracy (0.7242 on arousal and 0.7310 on valence) and F1-score (0.6631 on arousal and 0.6775 on valence). The results show that the frequency domain

Input: a training sample x1, learning rate ε, glia effect vector g
Output: weight matrix W, bias vector of the visible layer b, bias vector of the hidden layer c

Start training
  Initialize v(0): v(0) ← x1   (initialize state value of the visible layer v)
  for j = 1 … m   (for all hidden units)
      p(h_j(0) = 1 | v(0)) = σ(Σ_i W_ij v_i(0) + c_j)
  end for
  Extract a sample h(0) according to h(0) ~ p(h(0) | v(0))
  for i = 1 … n   (for all visible units)
      p(v_i(1) = 1 | h(0)) = σ(Σ_j W_ij h_j(0) + b_i)
  end for
  Extract a sample v(1) according to v(1) ~ p(v(1) | h(0))
  for j = 1 … m   (for all hidden units, calculate the output value without glia effect)
      h_j(1)* = Σ_i W_ij v_i(1) + c_j
  end for
  Update the glia effect vector g
  for j = 1 … m   (for all hidden units, calculate the output value with glia effect)
      h_j(1) = σ(h_j(1)* + α · g_j)
  end for
  Update parameters:
      W ← W + ε (p(h(0) = 1 | v(0)) v(0)^T − p(h(1) = 1 | v(1)) v(1)^T)
      b ← b + ε (v(0) − v(1))
      c ← c + ε (p(h(0) = 1 | v(0)) − p(h(1) = 1 | v(1)))

ALGORITHM 1: Pseudocode for training the RBM with glia chain.
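For concreteness, a hedged Python rendering of Algorithm 1 as a single CD-1 step might look as follows. The glia-vector update of equation (3) is abstracted behind a hypothetical `update_glia` callable, and the default parameter values are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_glia_step(x1, W, b, c, g, alpha=0.8, eps=0.05, rng=None,
                  update_glia=None):
    """One CD-1 update of the RBM with glia chain (a sketch of Algorithm 1).
    W: (n_visible, n_hidden) weights; b, c: visible/hidden biases;
    g: glia effect vector.  `update_glia` implements equation (3);
    by default the glia vector is left unchanged."""
    if rng is None:
        rng = np.random.default_rng(0)
    v0 = x1
    ph0 = sigmoid(v0 @ W + c)                     # p(h(0)=1 | v(0))
    h0 = (rng.random(ph0.size) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                   # p(v(1)=1 | h(0))
    v1 = (rng.random(pv1.size) < pv1).astype(float)
    h1_star = v1 @ W + c                          # output without glia effect
    if update_glia is not None:
        g = update_glia(g, h1_star)               # equation (3)
    ph1 = sigmoid(h1_star + alpha * g)            # glia-modulated output
    # Parameter update (CD-1)
    W = W + eps * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b = b + eps * (v0 - v1)
    c = c + eps * (ph0 - ph1)
    return W, b, c, g
```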


characteristics of EEG signals can provide salient information regarding emotion states.

The results validate that the glia chain can improve the learning performance of the deep structure. Through the glia chains, the hidden layer units in the DBN-GC can transfer information to each other, and the DBN-GC model can capture the correlation information among units within the same hidden layer. Thus, the improved DBN model can learn more discriminative features. For the EEG-based emotion recognition task, the DBN-GC can mine interchannel correlations and utilize interchannel information, which is often ignored by other emotion recognition studies.

3.2. Results of the DBN-GC-Based Ensemble Deep Learning Model. Then the proposed DBN-GC-based ensemble deep learning model is employed to perform the emotion recognition task. Each parallel DBN-GC in the ensemble deep learning model has 3 hidden layers. The numbers of hidden neurons in each parallel DBN-GC are listed in Table 3. Through the five parallel DBN-GCs, the samples' feature dimensionality is reduced from 664 to 350.

Table 4 compares the average recognition performance of the proposed DBN-GC-based ensemble deep learning model against several deep learning-based studies on the same database. Specifically, Tripathi et al. divided the

[Figure 5: Architecture of the DBN-GC-based ensemble deep learning model — the input layer receives x(F0), which is split into x(F1) … x(F5) for the five parallel DBN-GCs; their outputs s1(x(F1)) … s5(x(F5)) feed a discriminative RBM that produces the label.]

Table 2: Hyperparameters of DBNs and DBN-GCs.

Layer       F6                  F7                  F8
        DBN6   DBN-GC6      DBN7   DBN-GC7      DBN8   DBN-GC8
v1      128    128          216    216          320    320
h1      100    100          200    200          300    300
h2       80     80          135    135          280    280
h3       45     45           90     90          125    125
o1        2      2            2      2            2      2

Note: v1 represents the input layer, o1 represents the output layer, and hi represents the i-th hidden layer.


8064 readings per channel into 10 batches, and for each batch extracted 9 characteristics, such as the mean and variance; then a deep neural network and a convolutional neural network were used for emotion recognition [29]. Li et al. proposed frame-based EEG features obtained through wavelet and scalogram transforms and designed a hybrid deep learning model combining a CNN and an RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, and then an SVM with an RBF kernel was employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, and 3 classifiers were built to predict emotion states [26]. Given that the above studies did not report F1-scores, the average recognition performance of the proposed model is also compared with that of reference [32]. In that reference, Koelstra et al. analyzed central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Since the proposed DBN-GC-based method in this paper is based on the EEG signal, Table 4 only lists the recognition results of the CNS feature-based single modality from reference [32]. The DEAP dataset is used in all the references in Table 4, and in all of them the trials are divided into two classes for valence and arousal, respectively (ratings above 5 versus ratings below 5).

As can be seen from Table 4, the DBN-GC-based ensemble deep learning model outperforms most of the above deep classifiers in recognition accuracy. Meanwhile, the F1-scores achieved by the proposed model are clearly superior to the 0.5830 and 0.5630 reported in reference [32]. The proposed method provides a mean recognition accuracy (MRA) of 0.7683 on valence, which is lower than the highest MRA reported on valence (0.8141).

[Figure 6: Comparison of emotion recognition results between DBN and DBN-GC (DBN6/DBN-GC6, DBN7/DBN-GC7, DBN8/DBN-GC8) — (a) accuracy (arousal), (b) accuracy (valence), (c) F1-score (arousal), (d) F1-score (valence).]

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

           h1    h2    h3
DBN-GC1   110   105    90
DBN-GC2   145   145   120
DBN-GC3    55    55    45
DBN-GC4   120    55    45
DBN-GC5   155    80    50
Sum       585   440   350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                              Arousal               Valence
                                         Accuracy  F1-score   Accuracy  F1-score
Deep neural network [29]                  0.7313      —         0.7578      —
Convolutional neural network [29]         0.7336      —         0.8141      —
CNN&RNN [37]                              0.7412      —         0.7206      —
DBN-SVM [38]                              0.6420      —         0.5840      —
Wang and Shang [26]                       0.5120      —         0.6090      —
CNS feature-based single modality [32]    0.6200    0.5830      0.5760    0.5630
DBN-GC-based ensemble deep
learning model                            0.7592    0.6931      0.7683    0.7015


This may be because the CNN model proposed by Tripathi et al. benefits from sufficient training and validation instances. In addition, the ensemble deep learning model also outperforms the three DBN-GCs in Table 2 that are trained on a single feature subset. This indicates that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals are complementary in emotion recognition, and the proposed method can integrate the different types of characteristics effectively.

3.3. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of the glia cells on the DBN and thus affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually, each taking 20 values in the range of 0 to 1 with an interval of 0.05. Under the different values of each parameter, the MRA on valence

[Figure 7: MRA of the proposed ensemble deep learning model (arousal and valence) for different values of the glia effect weight, varied from 0.05 to 1.]

[Figure 8: MRA of the proposed ensemble deep learning model (arousal and valence) for different values of the attenuation factor, varied from 0.05 to 1.]


and the MRA on arousal are taken as the final evaluation criteria, and the analysis results are shown in Figures 7–9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, the higher values appear when the weight value is close to 0.15 or 0.80. Taking these two indicators into account simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification for different values of the attenuation factor. When the value of the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. As the attenuation factor increases to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus, it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification for different values of the glia threshold. Although the highest MRA on arousal occurs when the glia threshold is set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. The MRA on valence rises slowly once the glia threshold exceeds 0.25. It is therefore appropriate to set the glia threshold within the range of 0.70 to 0.80.

4. Conclusions

In this paper, we presented an ensemble deep learning model integrating parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information in multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of the glia cells in the DBN-GC is able to mine this interchannel correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals appear to be complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and of the ensemble deep learning framework-based fusion method is validated by experiments on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset has to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail; a username and password to download the data will then be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher of Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

[Figure 9: MRA of the proposed ensemble deep learning model (arousal and valence) for different values of the glia threshold, varied from 0.05 to 1.]


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220–232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440–1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145–156, 2018.

[4] C.-N. Anaqnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155–177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of 2015 ACM International Conference on Multimodal Interaction, pp. 467–474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94–106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230–2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101–120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390–396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1–6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589–597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306–310, 1970.

[15] K. Ansari-asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of 15th European Signal Processing Conference, pp. 1241–1245, Springer, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457–466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81–97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162–172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241–247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21–25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138–144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of International Conference on Computer Vision and Image Processing, pp. 335–343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39–50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505–511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1–6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1–27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746–4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of 2012 International Joint Conference on Neural Networks, pp. 1–6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943–952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149–164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35–41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103–111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93–110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of IEEE International Conference on Bioinformatics and Biomedicine, pp. 352–359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of SIGIR 2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.


For example, if the output of a hidden unit h1 is higher than the prespecified threshold, the corresponding glia cell g1 will be activated, and a signal is then transmitted to the glia cell g2. When the signal reaches g2, the glia cell g2 will be activated regardless of whether the output of the hidden unit h2 reaches the prespecified threshold. Then glia cell g2 will produce a second signal to propagate; meanwhile, the signal generated by g1 will continue to spread. In order to simplify the calculation, all signals produced by glia cells propagate in a fixed direction along the glia chain; that is, the signals are transmitted from the first glia cell on the chain to the last.

For an RBM with a glia chain, the output rule of the hidden units is updated as follows:

h_j = σ(h_j^* + α · g_j),    (1)

where h_j^* is the output value of the hidden node j before the output rule is updated, g_j is the glia effect value of the corresponding glia cell, α is the weight coefficient of the glia effect value, and σ(·) is the sigmoid function. The weight coefficient α is set manually and controls the influence of the glia effect on the hidden units. h_j^* can be calculated as

h_j^* = Σ_i W_ij v_i + c_j,    (2)

where W_ij is the connection weight between the visible unit i and the hidden unit j, v_i is the state value of the visible unit i, and c_j is the bias value of the hidden unit j. Instead of random sampling, the activation probability is employed as the output of each hidden unit, which reduces sampling noise and speeds up learning.

The glia effect value of glia cell g_j is defined as

g_j(t) = { 1,              if (h_j^* > θ ∪ g_(j−1)(t−1) = 1) ∩ (t_j < T)
         { β · g_j(t−1),   otherwise,    (3)

where θ is the prespecified threshold, T is an unresponsive time threshold after activation, and β represents the attenuation factor. Each time, the signal produced by an activated glia cell is passed to the next glia cell. The activation of a glia cell depends on whether the output of the corresponding hidden unit reaches the prespecified threshold θ or whether the previous glia cell conveys a signal to it; meanwhile, the difference t_j between its last activation time and the current time must be less than T. If the glia cell is activated, it transmits a signal to the next glia cell; otherwise, it produces no signal and its glia effect gradually decays.
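This update rule can be sketched in code as follows. The parameter values in the test are illustrative, and the bookkeeping of the last-activation times in `last_fire` is an assumption, since the paper does not spell out how t_j is tracked.

```python
import numpy as np

def update_glia_chain(g_prev, h_star, t, last_fire, theta, T, beta):
    """Sketch of equation (3): g_prev is g(t-1), h_star the pre-glia hidden
    outputs h_j^*, and last_fire[j] the time of glia cell j's last
    activation.  A cell fires (g_j = 1) if its hidden unit exceeds theta
    or the previous cell on the chain fired at the last step, subject to
    the timing condition t_j < T; otherwise its effect decays by beta."""
    m = len(g_prev)
    g = np.empty(m)
    for j in range(m):
        prev_fired = (j > 0 and g_prev[j - 1] == 1.0)
        t_j = t - last_fire[j]
        if (h_star[j] > theta or prev_fired) and t_j < T:
            g[j] = 1.0                  # cell activates and signals onward
            last_fire[j] = t
        else:
            g[j] = beta * g_prev[j]     # glia effect decays
    return g
```

Note how activation propagates one cell per step along the chain, matching the directional signal spread described above.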

After integrating the glia cell mechanism the learningalgorithm of RBM is improved and the pseudocodes of thelearning algorithm are listed in Algorithm 1

e training process of a DBN-GC which is similar tothat of DBN consists of 2 steps pretraining and ne-tuningGlia cell mechanism only acts on the pretraining process Inthe pretraining phase a greedy layer-wise unsupervisedmethod is adopted to train each RBM and the hidden layerrsquosoutput of the previous RBM is used as the visible layerrsquosinput of the next RBM In the ne-tuning phase backpropagation is performed to ne-tune the parameters of theDNB-GC

232 DBN-GC-Based Ensemble Deep Learning ModelConsidering that the raw EEG features in Table 1 may sharedishyerent hidden properties across dishyerent time domain andfrequency domain modalities we proposed a DBN-GC-based ensemble deep learning model which implements aDBN-GC-based network on homogenous feature subsetindependently e feature vectors derived from dishyerentfeature subsets are fed into the corresponding DBN-GCrespectively and the higher feature abstractions of eachfeature subset are obtained as the outputs of the last hiddenlayer in the corresponding DBN-GC en a discriminativeRBM is built upon the combined higher feature abstractionse network architecture is illustrated in Figure 5 eensemble deep learning model consists of three parts theinput layer ve parallel DBN-GCs and a discriminativeRBM

e overall raw EEG feature set in Table 1 can be denedas F0 which is split into ve nonoverlapped physiologicalfeature subsets F1 F2 F3 F4 and F5 e statistical measuresfrom time domain construct subset F1 and the multichannelEEG PSDs construct subset F2 Another subset F3 is built byEEG power dishyerences In view of the heterogeneity ofmultichannel HHS features in time domain and frequencydomain the HHS features can be grouped into two subsetsF4 and F5 indicating squared amplitude and instantaneousfrequency respectively

Dene feature vectors formed by the feature subset Fi asx(Fi) e features in F0 form the input vector x(F0) of the

RBM withglia chainVisible

layer

Hiddenlayer

h1

v1

g1

vi

hjgj

vn

hngm

Figure 4 Structure of RBM with glia chain

RBM with glia chain

RBM with glia chain

RBM with glia chain

RBM with glia chain

Hiddenlayers

Visible layer

Figure 3 Structure of deep belief network with glia chains

4 Computational Intelligence and Neuroscience

ensemble deep learning framework which is fed into theinput layer en the input vector x(F0) is split into fivesubvectors x(F1) x(F2) x(F3) x(F4) and x(F5) e fivesubvectors are the input to the corresponding DBN-GCrespectively e five DBN-GC based deep models are builtfor learning the hidden feature abstractions of each raw EEGfeature subset and the hidden feature abstractions are de-scribed as s1(x(F1)) s2(x(F2)) s3(x(F3)) s4(x(F4)) ands5(x(F5)) si(x(Fi)) is the output vector of the last hidden layerof the corresponding DBN-GC i en s1(x(F1)) s2(x(F2))s3(x(F3)) s4(x(F4)) and s5(x(F5)) are merged into a vectorwhich is fed into the discriminative RBM to recognizeemotion states

When building the DBN-GC-based ensemble deeplearning model the five DBN-GCs are trained firstly Anadditional two-neuron output layer that corresponds tobinary emotions is added when training each DBN-GCen the discriminative RBM is built upon the combinedhigher feature abstractions derived from the five DBN-GCsTo determine the DBN-GCsrsquo hyperparameters differentcombinations of hyperparameters are tested and the pa-rameter combination with the minimal recognition error isadopted

3 Results and Discussion

In view of the limited sample size in the dataset cross-validation techniques are adopted in the experiments eensemble deep learning model is trained and tested via 10-fold cross-validation technique with a participant-specificstyle For each of the 32 volunteers the corresponding 40instances are divided into 10 subsets 9 subsets (36 instances)are assigned to the training set and the remaining 1 (4

instances) is assigned to the test set e above process isrepeated 10 times until all subsets are tested

31 Comparison between DNB and DBN-GC In order tostudy the learning performance of DBN-GC we first usethree feature subsets (F6 F7 and F8) to train DBNs andDBN-GCs respectively e three subsets are given as fol-lows F6 F1 F7 F2 cup F3 and F8 F4 cup F5e three featuresubsets represent time domain characteristics frequencydomain characteristics and time-frequency characteristicsrespectively A DBN and a DBN-GC are trained by the samefeature subset and they have the same hyperparameters asshown in Table 2 In addition the parameters of the DBNand the DBN-GC which share the same feature subset suchas learning rate are all set to the same value e six modelsperform the same emotion recognition task and the metricsfor recognition performance adopt accuracy and F1-score

The detailed recognition performance comparisons on the arousal and valence dimensions are illustrated in Figure 6.

Each column represents the statistical results of the 32 participants. Figures 6(a) and 6(c) show the classification accuracy and F1-score on the arousal dimension; Figures 6(b) and 6(d) show the classification accuracy and F1-score on the valence dimension. As can be seen from Figure 6, no matter which feature subset is used, the DBN-GC model clearly outperforms the corresponding baseline DBN model, with a higher median accuracy and F1-score and a relatively low standard deviation. Among the three DBN-GC models, DBN-GC7, which is built on the feature subset F7, achieves the highest recognition accuracy (0.7242 on arousal and 0.7310 on valence) and F1-score (0.6631 on arousal and 0.6775 on valence). The results show that the frequency domain

Input: a training sample x1, learning rate ε, glia effect vector g
Output: weight matrix W, bias vector of the visible layer b, bias vector of the hidden layer c
Start training
    Initialize v(0): v(0) = x1  (initialize the state value of the visible layer v)
    for j = 1, ..., m  (for all hidden units)
        p(h_j(0) = 1 | v(0)) = σ(Σ_i W_ij v_i(0) + c_j)
    end for
    Extract a sample h(0) according to h(0) ~ p(h(0) | v(0))
    for i = 1, ..., n  (for all visible units)
        p(v_i(1) = 1 | h(0)) = σ(Σ_j W_ij h_j(0) + b_i)
    end for
    Extract a sample v(1) according to v(1) ~ p(v(1) | h(0))
    for j = 1, ..., m  (for all hidden units, calculate the output value without glia effect)
        h_j(1)* = Σ_i W_ij v_i(1) + c_j
    end for
    Update the glia effect vector g
    for j = 1, ..., m  (for all hidden units, calculate the output value with glia effect)
        h_j(1) = σ(h_j(1)* + α · g_j)
    end for
    Update parameters:
        W ← W + ε (p(h(0) = 1 | v(0)) v(0)^T − p(h(1) = 1 | v(1)) v(1)^T)
        b ← b + ε (v(0) − v(1))
        c ← c + ε (p(h(0) = 1 | v(0)) − p(h(1) = 1 | v(1)))

Algorithm 1: Pseudocode for training the RBM with glia chain.
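A runnable NumPy sketch of Algorithm 1. The glia-update rule is not spelled out in the pseudocode itself, so a simple threshold-and-decay rule is assumed here (a glia value resets to 1 when its unit's activation exceeds θ, and otherwise decays by β), consistent with the roles of α, β, and θ described in Section 3.3:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_glia_step(x1, W, b, c, g, eps=0.1, alpha=0.5, beta=0.8, theta=0.5):
    """One CD-1 update for an RBM with a glia chain (cf. Algorithm 1).

    x1    : binary visible sample, shape (n,)
    W     : weight matrix, shape (n, m)
    b, c  : visible/hidden biases, shapes (n,) and (m,)
    g     : glia effect vector, shape (m,)
    alpha : weight coefficient of the glia effect
    beta, theta : attenuation factor and glia threshold (assumed update rule)
    """
    rng = np.random.default_rng(0)

    # Positive phase: hidden probabilities and a sampled hidden state
    v0 = x1
    p_h0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: reconstruct the visible layer
    p_v1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)

    # Hidden activation without glia effect
    h1_star = v1 @ W + c

    # Assumed glia update: fire to 1 when the unit's activation exceeds
    # theta, otherwise let the previous glia effect decay by beta
    g = np.where(sigmoid(h1_star) > theta, 1.0, beta * g)

    # Hidden probability with the glia effect added
    p_h1 = sigmoid(h1_star + alpha * g)

    # CD-1 parameter updates
    W += eps * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b += eps * (v0 - v1)
    c += eps * (p_h0 - p_h1)
    return W, b, c, g
```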

Computational Intelligence and Neuroscience 5

characteristics of EEG signals can provide salient information regarding emotion states.

The results validate that the glia chain can improve the learning performance of the deep structure. Through the glia chains, the hidden layer units in the DBN-GC can transfer information to each other, and the DBN-GC model can obtain the correlation information between units in the same hidden layer. Thus the improved DBN model can learn more discriminative features. For the EEG-based emotion recognition task, the DBN-GC can mine interchannel correlation and utilize interchannel information, which is often ignored by other emotion recognition studies.

3.2. Results of the DBN-GC-Based Ensemble Deep Learning Model. Then the proposed DBN-GC-based ensemble deep learning model is employed to perform the emotion recognition task. Each parallel DBN-GC in the ensemble deep learning model has 3 hidden layers. The numbers of hidden neurons of each parallel DBN-GC are listed in Table 3. Through the five parallel DBN-GCs, the samples' feature dimensionality is reduced from 664 to 350.

Table 4 compares the average recognition performance of the proposed DBN-GC-based ensemble deep learning model with that of several deep learning-based studies on the same database. Specifically, Tripathi et al. divided the

Figure 5: Architecture of the DBN-GC-based ensemble deep learning model. The input vector x(F0) is split into x(F1) through x(F5); each subvector feeds one of the five parallel DBN-GCs, and the merged abstractions s1(x(F1)) through s5(x(F5)) feed the discriminative RBM, which outputs the label.

Table 2: Hyperparameters of DBNs and DBN-GCs.

              F6                    F7                    F8
         DBN6   DBN-GC6        DBN7   DBN-GC7        DBN8   DBN-GC8
  v1     128    128            216    216            320    320
  h1     100    100            200    200            300    300
  h2      80     80            135    135            280    280
  h3      45     45             90     90            125    125
  o1       2      2              2      2              2      2

Note: v1 represents the input layer, o1 represents the output layer, and hi represents the i-th hidden layer.


8064 readings per channel into 10 batches. For each batch, 9 characteristics, such as mean and variance, were extracted; then a deep neural network and a convolutional neural network were used for emotion recognition [29]. Li et al. proposed frame-based EEG features through wavelet and scalogram transforms and designed a hybrid deep learning model combining a CNN and an RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, and then an SVM with RBF kernel was employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, and 3 classifiers were built to predict emotion states [26]. In view of the fact that the above studies did not adopt the F1-score as a metric for recognition performance, the average recognition performance of the proposed model is also compared with that of reference [32]. In this reference, Koelstra et al. analyzed central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Considering that the proposed DBN-GC-based method in this paper is based on the EEG signal, Table 4 only lists the recognition results of the CNS feature-based single modality in reference [32]. The DEAP dataset is used in all references in Table 4, and in all of them the trials are divided into two classes for valence and arousal, respectively (ratings divided as more than 5 and less than 5).

As can be seen from Table 4, the performance of the DBN-GC-based ensemble deep learning model in terms of recognition accuracy outperforms most of the above deep classifiers. Meanwhile, the F1-scores achieved by the proposed model are clearly superior to the 0.5830 and 0.5630 reported in reference [32]. The proposed method provides 0.7683 mean recognition accuracy (MRA) on valence, which is lower than the highest MRA reported on valence (0.8141).

Figure 6: Comparison of emotion recognition results between DBN and DBN-GC: (a) accuracy (arousal); (b) accuracy (valence); (c) F1-score (arousal); (d) F1-score (valence). Each panel compares DBN6/DBN-GC6, DBN7/DBN-GC7, and DBN8/DBN-GC8 over the 32 participants.

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

              h1     h2     h3
  DBN-GC1    110    105     90
  DBN-GC2    145    145    120
  DBN-GC3     55     55     45
  DBN-GC4    120     55     45
  DBN-GC5    155     80     50
  Sum        585    440    350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                               Arousal                Valence
                                          Accuracy  F1-score    Accuracy  F1-score
  Deep neural network [29]                 0.7313    -           0.7578    -
  Convolutional neural network [29]        0.7336    -           0.8141    -
  CNN+RNN [37]                             0.7412    -           0.7206    -
  DBN-SVM [38]                             0.6420    -           0.5840    -
  Wang and Shang [26]                      0.5120    -           0.6090    -
  CNS feature-based single modality [32]   0.6200    0.5830      0.5760    0.5630
  DBN-GC-based ensemble deep learning
  model                                    0.7592    0.6931      0.7683    0.7015


This may be due to the fact that the CNN model proposed by Tripathi et al. benefits from sufficient training and validation instances. In addition, the ensemble deep learning model also outperforms the three single-subset DBN-GCs of Table 2, each of which is trained on one feature subset. This indicates that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary in emotion recognition, and the proposed method can integrate the different types of characteristics effectively.

3.3. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of the glia cells on the DBN and thus affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually, each taking 20 values in the range of 0 to 1 with an interval of 0.05. Under the different values of each parameter, the MRA on valence

Figure 7: MRA of the proposed ensemble deep learning model with different values of the glia effect weight (arousal and valence curves; weight from 0.05 to 1, accuracy in %).

Figure 8: MRA of the proposed ensemble deep learning model with different values of the attenuation factor (arousal and valence curves; factor from 0.05 to 1, accuracy in %).


and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7–9.
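The sweep behind Figures 7–9 can be sketched as follows: one parameter varies over the 20-point grid while the other two are held fixed, and the resulting MRA curve is recorded. The `evaluate_mra` callback is a placeholder for training and cross-validating the full ensemble model:

```python
import numpy as np

GRID = np.round(np.arange(0.05, 1.0001, 0.05), 2)  # 20 values, step 0.05

def sweep_parameter(name, fixed, evaluate_mra):
    """Vary one glia parameter over GRID with the others fixed; return the
    MRA curve as {value: (mra_arousal, mra_valence)}."""
    curve = {}
    for value in GRID:
        params = dict(fixed)
        params[name] = value
        curve[value] = evaluate_mra(**params)
    return curve
```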

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, higher values appear when the weight value is close to 0.15 or 0.80. Taking these two indicators into account simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the value of the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. As the attenuation factor increases to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest value of the MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. For the MRA on valence, its value rises slowly once the glia threshold exceeds 0.25. It is appropriate to set the glia threshold within the range of 0.70 to 0.80.

4. Conclusions

In this paper, we presented an ensemble deep learning model that integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information from multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of glia cells in the DBN-GC has the ability to mine this interchannel correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and the ensemble deep learning framework-based fusion methods is validated by experiments on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset has to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail. Then a username and password to download the data will be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

Figure 9: MRA of the proposed ensemble deep learning model with different values of the glia threshold (arousal and valence curves; threshold from 0.05 to 1, accuracy in %).


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220–232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440–1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145–156, 2018.

[4] C.-N. Anaqnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155–177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of 2015 ACM International Conference on Multimodal Interaction, pp. 467–474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94–106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230–2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101–120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390–396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1–6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589–597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306–310, 1970.

[15] K. Ansari-Asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of 15th European Signal Processing Conference, pp. 1241–1245, Springer, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457–466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81–97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162–172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241–247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21–25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138–144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of International Conference on Computer Vision and Image Processing, pp. 335–343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39–50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505–511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1–6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1–27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746–4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of 2012 International Joint Conference on Neural Networks, pp. 1–6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943–952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149–164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35–41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103–111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93–110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of IEEE International Conference on Bioinformatics and Biomedicine, pp. 352–359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.


Figure 9 shows the results of arousal classication andvalence classication with dishyerent values of glia thresholdAlthough the highest value of MRA on arousal occurs withthe glia threshold set to 035 the MRA is more stable whenthe glia threshold is between 065 and 1 For the MRA onvalence its value has been rising slowly when the attenuationfactor exceeds 025 It is appropriate that the glia threshold iswithin the range of 070 to 080

4 Conclusions

In this paper we presented an ensemble deep learning modelwhich integrates parallel DBN-GCs and a discriminativeRBM for emotion recognition e interchannel correlationinformation from multichannel EEG signals which isoften neglected contains salient information regarding to

emotion states and the chain structure of glia cells in DBN-GC has the ability in mining interchannel correlation in-formation In addition the time domain characteristicsfrequency domain characteristics and time-frequencycharacteristics of EEG signals should be complementaryfor emotion recognition and the ensemble deep learningframework benets from the comprehensive fusion ofmultidomain feature abstractions e reliability of theDBN-GC and the ensemble deep learning framework-basedfusion methods is validated by the experiments based onDEAP database

Data Availability

e DEAP dataset used in our manuscript is a dataset foremotion analysis using electroencephalogram (EEG) andphysiological and video signals e DEAP dataset isavailable at httpwwweecsqmulacukmmvdatasetsdeap Anyone interested in using this dataset will have toprint sign and scan an EULA (end-user license agreement)and return it via e-mail en a username and password todownload the data will be provided e dataset was rstpresented in reference [32] DEAP a database for emotionanalysis using physiological signals

Conflicts of Interest

e authors declare that there are no concopyicts of interestregarding the publication of this paper

Acknowledgments

is paper was supported by the China National NatureScience Foundation (Nos 61502150 and 61300124) Foun-dation for University Key Teacher by Henan Province underGrant no 2015GGJS-068 Fundamental Research Funds for

ArousalValence

005 01 015 02 025 03 035 04 045 05 055 06 065 07 075 08 085 09 095 170

71

72

73

74

75

76

77

78

79

80

Accu

racy

()

Glia threshold

Figure 9 MRA of the proposed ensemble deep learning model with dishyerent values of glia threshold

Computational Intelligence and Neuroscience 9

the Universities of Henan Province under Grant noNSFRF1616 Foundation for Scientific and TechnologicalProject of Henan Province under Grant no 172102210279and Key Scientific Research Projects of Universities inHenan (No 19A520004)

References

[1] M-F Pedro M-P Arquimedes J-C Antoni andJ M B Rubio ldquoEvaluating the research in automatic emotionrecognitionrdquo IETE Technical Review vol 31 no 3 pp 220ndash232 2014

[2] M Chen X He J Yang and H Zhang ldquo3-D convolutionalrecurrent neural networks with attention model for speechemotion recognitionrdquo IEEE Signal Processing Letters vol 25no 10 pp 1440ndash1444 2018

[3] Z-T Liu Q Xie M Wu W-H Cao Y Mei and J-W MaoldquoSpeech emotion recognition based on an improved brainemotion learning modelrdquo Neurocomputing vol 309pp 145ndash156 2018

[4] C-N Anaqnostopoulos T Iliou and I Giannoukos ldquoFea-tures and classifiers for emotion recognition from speech asurvey from 2000 to 2011rdquo Artificial Intelligence Reviewvol 43 no 2 pp 155ndash177 2015

[5] S-E Kahou V Michalski K Konda et al ldquoRecurrent neuralnetworks for emotion recognition in videordquo in Proceedings of2015 ACM International Conference onMultimodal Interactionpp 467ndash474 ACM Seattle WC USA November 2015

[6] Y Huang J Yang P Liao and J Pan ldquoFusion of facial ex-pressions and EEG for multimodal emotion recognitionrdquoComputational Intelligence and Neuroscience vol 2017 Ar-ticle ID 2107451 8 pages 2017

[7] X-W Wang D Nie and B-L Lu ldquoEmotional state classi-fication from EEG data using machine learning approachrdquoNeurocomputing vol 129 pp 94ndash106 2014

[8] H J Yoon and S Y Chung ldquoEEG-based emotion estimationusing Bayesian weighted-log-posterior function and percep-tron convergence algorithmrdquo Computers in Biology andMedicine vol 43 no 12 pp 2230ndash2237 2013

[9] A-N Abeer H Manar A-O Yousef and A-W AreejldquoReview and classification of emotion recognition based onEEG brain-computer interface system research a systematicreviewrdquo Applied Sciences-Basel vol 7 no 12 p 1239 2017

[10] Y Liu and O Sourina ldquoReal-time fractal-based valence levelrecognition from EEGrdquo in Transactions on ComputationalScience XVIII vol 7848 pp101ndash120 Springer BerlinGermany 2013

[11] M Murugappan N Ramachandran and Y Sazali ldquoClassi-fication of human emotion from EEG using discrete wavelettransformrdquo Journal of Biomedical Science and Engineeringvol 3 no 4 pp 390ndash396 2010

[12] T-Y Chai W-S San C-S Tan andM Rizon ldquoClassificationof human emotions from EEG signals using statistical featuresand neural networkrdquo International Journal of IntegratedEngineering vol 1 no 3 pp 1ndash6 2009

[13] C Frantzidis C Bratsas C Papadelis E KonstantinidisC Pappas and P D Bamidis ldquoToward emotion awarecomputing an integrated approach using multichannelneurophysiological recordings and affective visual stimulirdquoIEEE Transactions on Information Technology in Biomedicinevol 14 no 3 pp 589ndash597 2010

[14] B Hjorth ldquoEEG analysis based on time domain propertiesrdquoElectroencephalography and Clinical Neurophysiology vol 29no 3 pp 306ndash310 1970

[15] K Ansari-asl G Chanel and T Pun ldquoA channel selectionmethod for EEG classification in emotion assessment basedon synchronization likelihoodrdquo in Proceedings of 15th Eu-ropean Signal Processing Conference pp 1241ndash1245 SpringerPoznan Poland September 2007

[16] E Kroupi A Yazdani and T Ebrahimi ldquoEEG correlates ofdifferent emotional states elicited during watching musicvideosrdquo in Proceedings of 4th International Conference onAffective Computing and Intelligent Interaction pp 457ndash466Springer Memphis TN USA October 2011

[17] P C Petrantonakis and L J Hadjileontiadis ldquoEmotion rec-ognition from EEG using higher order crossingsrdquo IEEETransactions on Information Technology in Biomedicinevol 14 no 2 pp 186ndash197 2010

[18] P C Petrantonakis and L J Hadjileontiadis ldquoEmotion rec-ognition from brain signals using hybrid adaptive filtering andhigher order crossings analysisrdquo IEEE Transactions on Af-fective Computing vol 1 no 2 pp 81ndash97 2010

[19] G K Verma and U S Tiwary ldquoMultimodal fusion frame-work a multiresolution approach for emotion classificationand recognition from physiological signalsrdquo NeuroImagevol 102 pp 162ndash172 2014

[20] S K Hadjidimitriou and L J Hadjileontiadis ldquoToward anEEG-based recognition of music liking using time-frequencyanalysisrdquo IEEE Transactions on Biomedical Engineeringvol 59 no 12 pp 3498ndash3510 2012

[21] M Akin ldquoComparison of wavelet transform and FFTmethodsin the analysis of EEG signalsrdquo Journal of Medical Systemsvol 26 no 3 pp 241ndash247 2002

[22] M Murugappan M Rizon S Yaacob et al ldquoEEG featureextraction for classifying emotions using FCM and FKMrdquoInternational Journal of Computers and Communicationsvol 1 no 2 pp 21ndash25 2007

[23] A Samara M-L-R Menezes and L Galway ldquoFeature ex-traction for emotion recognition and modeling using neu-rophysiological datardquo in Proceedings of 15th InternationalConference on Ubiquitous Computing and Communicationsand 2016 International Symposium on Cyberspace and Security(IUCC-CSS) pp 138ndash144 IEEE Granada Spain December2016

[24] N Jadhav R Manthalkar and Y Joshi ldquoElectroencephalography-Based emotion recognition using gray-level co-occurrence matrix featuresrdquo in Proceedings of InternationalConference on Computer Vision and Image Processingpp 335ndash343 Springer Roorkee Uttarakhand December2016

[25] N ammasan K Moriyama K-i Fukui and M NumaoldquoFamiliarity effects in EEG-based emotion recognitionrdquo BrainInformatics vol 4 no 1 pp 39ndash50 2016

[26] D Wang and Y Shang ldquoModeling physiological data withdeep belief networksrdquo International Journal of Informationand Education Technology vol 3 no 5 pp 505ndash511 2013

[27] H Xu andK-N Plataniotis ldquoAffective states classification usingEEG and semi-supervised deep learning approachesrdquo in Pro-ceedings of IEEE 18th International Workshop on MultimediaSignal pp 1ndash6 IEEE Montreal Canada September 2016

[28] X Li D Song P Zhang Y Hou and B Hu ldquoDeep fusion ofmulti-channel neurophysiological signal for emotion recog-nition and monitoringrdquo International Journal of Data Miningand Bioinformatics vol 18 no 1 pp 1ndash27 2017

[29] S Tripathi S Acharya R-D Sharma et al ldquoUsing deep andconvolutional neural networks for accurate emotion classi-fication on DEAP datasetrdquo in Proceedings of Twenty-NinthAAAI Conference on Innovative Applications of Artificial

10 Computational Intelligence and Neuroscience

Intelligence pp 4746ndash4752 AAAI San Francisco CA USAFebruary 2017

[30] C Ikuta Y Uwate and Y Nishio ldquoMulti-layer perceptronwith positive and negative pulse glia chain for solving two-spirals problemrdquo in Proceedings of 2012 International JointConference on Neural Networks pp 1ndash6 IEEE BrisbaneAustralia June 2012

[31] Z-Q Geng and Y-K Zhang ldquoAn improved deep beliefnetwork inspired by glia chainsrdquo Acta Automatica Sinicavol 42 no 6 pp 943ndash952 2016

[32] S Koelstra C Muehl M Soleymani et al ldquoDEAP a databasefor emotion analysis using physiological signalsrdquo IEEETransaction on Affective Computing vol 3 no 1 pp 18ndash312012

[33] M Khezri M Firoozabadi and A R Sharafat ldquoReliableemotion recognition system based on dynamic adaptive fu-sion of forehead biopotentials and physiological signalsrdquoComputer Methods and Programs in Biomedicine vol 122no 2 pp 149ndash164 2015

[34] J Atkinson and D Campos ldquoImproving BCI-based emotionrecognition by combining EEG feature selection and kernelclassifiersrdquo Expert Systems with Applications vol 47pp 35ndash41 2016

[35] C Li C Xu and Z Feng ldquoAnalysis of physiological foremotion recognition with the IRS modelrdquo Neurocomputingvol 178 pp 103ndash111 2016

[36] Z Yin M-Y Zhao Y-X Wang J Yang and J ZhangldquoRecognition of emotions using multimodal physiologicalsignals and an ensemble deep learning modelrdquo ComputerMethods and Programs in Biomedicine vol 140 pp 93ndash1102017

[37] X Li D-W Song P Zhang et al ldquoEmotion recognition frommulti-channel EEG data through convolutional recurrentneural networkrdquo in Proceedings of IEEE International Con-ference on Bioinformatics and Biomedicine pp 352ndash359 IEEEShenzhen China December 2016

[38] X Li P Zhang and D-W Song ldquoEEG based emotionidentification using unsupervised deep feature learningrdquo inProceedings of SIGIR2015 Workshop on Neuro-PhysiologicalMethods in IR Research August 2015

Computational Intelligence and Neuroscience 11

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 6: RecognitionofEmotionsUsingMultichannelEEGDataand DBN-GC …downloads.hindawi.com/journals/cin/2018/9750904.pdf · 2019-07-30 · types of deep learning approaches, stacked denoising

characteristics of EEG signals can provide salient information regarding emotion states.

The results validate that the glia chain can improve the learning performance of the deep structure. Through the glia chains, the hidden layer units in the DBN-GC can transfer information to each other, and the DBN-GC model can obtain the correlation information between units in the same hidden layer. Thus, the improved DBN model can learn more discriminative features. For the EEG-based emotion recognition task, the DBN-GC can mine and utilize interchannel correlation information, which is often ignored by other emotion recognition studies.
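The glia-chain mechanism described above can be sketched in a few lines. This is a simplified, hypothetical rendering of the pulse glia chain of [30, 31], not the authors' implementation; the parameters mirror the weight coefficient α, attenuation factor β, and glia threshold θ discussed in Section 3.3.

```python
import numpy as np

def glia_chain_effect(h, alpha=0.8, beta=0.5, theta=0.75):
    """Sketch of a glia chain modulating one hidden layer of a DBN-GC.

    h     : activations of the hidden units (1-D array)
    alpha : weight coefficient of the glia effect
    beta  : attenuation factor applied as the effect travels the chain
    theta : activation threshold above which a glia cell fires
    """
    n = len(h)
    glia = np.zeros(n)
    # A glia cell fires when the activation of its hidden unit
    # exceeds the threshold theta.
    fired = h > theta
    # A fired glia cell propagates its effect along the chain,
    # attenuated by beta per step, letting units in the same hidden
    # layer exchange information.
    for i in np.where(fired)[0]:
        for j in range(n):
            glia[j] += beta ** abs(i - j)
    # The accumulated glia effect is added to the activations,
    # weighted by alpha.
    return h + alpha * glia
```

With `h = [0.9, 0.1, 0.2]`, only the first glia cell fires and its influence decays geometrically over the other units.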

3.2. Results of the DBN-GC-Based Ensemble Deep Learning Model. The proposed DBN-GC-based ensemble deep learning model is then employed to perform the emotion recognition task. Each parallel DBN-GC in the ensemble deep learning model has 3 hidden layers. The numbers of hidden neurons of each parallel DBN-GC are listed in Table 3. Through the five parallel DBN-GCs, the samples' feature dimensionality is reduced from 664 to 350.
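The fusion step can be sketched as follows: each parallel DBN-GC maps its feature subset x(F_i) to a top-layer representation s_i(x(F_i)), and the five representations are concatenated into the visible layer of the discriminative RBM. The layer widths below come from Table 3; the function name is illustrative, not from the paper's code.

```python
import numpy as np

# Top hidden-layer widths (h3) of the five parallel DBN-GCs, from Table 3.
TOP_DIMS = {"F1": 90, "F2": 120, "F3": 45, "F4": 45, "F5": 50}  # sums to 350

def fuse_representations(reps):
    """Concatenate the intermediate representations s_i(x(F_i)).

    reps : dict mapping feature-subset name ("F1" ... "F5") to the
           top-layer output of the corresponding DBN-GC (1-D array).
    The fused 350-dim vector is what the discriminative RBM receives
    for the final emotion classification.
    """
    fused = np.concatenate([np.asarray(reps[k]) for k in sorted(reps)])
    assert fused.size == sum(TOP_DIMS.values()), "expected 350 dimensions"
    return fused
```

This mirrors how the 664-dimensional raw feature vector is compressed to a 350-dimensional fused abstraction before classification.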

Table 4 compares the average recognition performance between the proposed DBN-GC-based ensemble deep learning model and several deep learning-based studies on the same database. Specifically, Tripathi et al. divided the

Figure 5: Architecture of the DBN-GC-based ensemble deep learning model. The raw feature vector x(F0) is split into subsets x(F1), ..., x(F5); each subset feeds a parallel DBN-GC (hidden layers h1, ..., hn) whose top-layer output s_i(x(F_i)) is passed to a discriminative RBM with a label layer.

Table 2: Hyperparameters of DBNs and DBN-GCs.

            F6                    F7                    F8
       DBN6   DBN-GC6        DBN7   DBN-GC7        DBN8   DBN-GC8
v1     128    128            216    216            320    320
h1     100    100            200    200            300    300
h2     80     80             135    135            280    280
h3     45     45             90     90             125    125
o1     2      2              2      2              2      2

Note: v1 represents the input layer, o1 represents the output layer, and hi represents the i-th hidden layer.

6 Computational Intelligence and Neuroscience

8064 readings per channel into 10 batches. For each batch, 9 characteristics such as mean and variance were extracted. Then a deep neural network and a convolutional neural network were used for emotion recognition [29]. Li et al. proposed frame-based EEG features through wavelet and scalogram transforms and designed a hybrid deep learning model which combined a CNN and an RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, and then an SVM with an RBF kernel was employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, and 3 classifiers were built to predict emotion states [26]. In view of the fact that the above studies did not introduce the F1-score as a metric for recognition performance, the average recognition performance of the proposed model is also compared with that of reference [32]. In this reference, Koelstra et al. analyzed the central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Considering that the proposed DBN-GC-based method in this paper is based on the EEG signal, Table 4 only lists the recognition results of the CNS feature-based single modality in reference [32]. The DEAP dataset is used in all references in Table 4, and in all of them the trials are divided into two classes for valence and arousal, respectively (ratings more than 5 versus less than 5).
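The two-class split shared by the studies in Table 4 can be expressed in a few lines. This is a generic sketch of DEAP label binarization (self-assessment ratings on a 1-9 scale thresholded at 5), not code from any of the cited works.

```python
def binarize_rating(rating, threshold=5.0):
    """Map a DEAP self-assessment rating (continuous 1-9 scale) to a
    binary class: high (1) if the rating exceeds the threshold,
    otherwise low (0). The same rule is applied independently to the
    valence and arousal ratings.
    """
    return 1 if rating > threshold else 0
```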

As can be seen from Table 4, the performance of the DBN-GC-based ensemble deep learning model regarding recognition accuracy outperforms most of the above deep classifiers. Meanwhile, the F1-scores achieved by the proposed model are obviously superior to the 0.5830 and 0.5630 reported by reference [32]. The proposed method provides 0.7683 mean recognition accuracy (MRA) on valence, which is lower than the highest MRA reported on valence (0.8141).
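The two metrics in Table 4 are plain classification accuracy and the binary F1-score. The following is a self-contained sketch of the standard definitions, not the authors' evaluation code:

```python
def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and binary F1-score, the two metrics of Table 4.

    y_true, y_pred : sequences of binary labels (0 = low, 1 = high).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1
```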

Figure 6: Comparison of emotion recognition results between DBN and DBN-GC. (a) Accuracy (arousal); (b) accuracy (valence); (c) F1-score (arousal); (d) F1-score (valence); each panel compares DBN6 vs. DBN-GC6, DBN7 vs. DBN-GC7, and DBN8 vs. DBN-GC8.

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

           h1     h2     h3
DBN-GC1    110    105    90
DBN-GC2    145    145    120
DBN-GC3    55     55     45
DBN-GC4    120    55     45
DBN-GC5    155    80     50
Sum        585    440    350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                            Arousal                Valence
                                            Accuracy   F1-score    Accuracy   F1-score
Deep neural network [29]                    0.7313     —           0.7578     —
Convolutional neural network [29]           0.7336     —           0.8141     —
CNN+RNN [37]                                0.7412     —           0.7206     —
DBN-SVM [37]                                0.6420     —           0.5840     —
Wang and Shang [26]                         0.5120     —           0.6090     —
CNS feature-based single modality [32]      0.6200     0.5830      0.5760     0.5630
DBN-GC-based ensemble deep learning model   0.7592     0.6931      0.7683     0.7015


This may be due to the fact that the CNN model proposed by Tripathi et al. benefits from sufficient training and validation instances. In addition, the performance of the ensemble deep learning model also outperforms the three DBN-GCs in Table 2, which are trained on a single-feature subset. This indicates that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary in emotion recognition, and the proposed method can integrate different types of characteristics effectively.

3.3. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of glia cells on the DBN and then affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually, each taking 20 values in the range of 0 to 1 with an interval of 0.05. Under the different values of each parameter, the MRA on valence

Figure 7: MRA of the proposed ensemble deep learning model with different values of glia effect weight (accuracy in %, arousal and valence, weight coefficient from 0.05 to 1).

Figure 8: MRA of the proposed ensemble deep learning model with different values of attenuation factor (accuracy in %, arousal and valence, factor from 0.05 to 1).


and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7–9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, the higher values appear when the weight value is close to 0.15 or 0.80. Taking these two indicators into account simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the value of the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. With the attenuation factor increased to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus, it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest value of the MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. The MRA on valence rises slowly once the glia threshold exceeds 0.25. It is appropriate to set the glia threshold within the range of 0.70 to 0.80.
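The manual sweep in Section 3.3 amounts to varying one glia parameter at a time over 0.05, 0.10, ..., 1.00 while holding the others fixed and keeping the value with the best MRA. A sketch, where `evaluate` is a hypothetical stand-in for training and testing the full ensemble model:

```python
def sweep_parameter(evaluate, name, defaults, step=0.05):
    """Vary one glia parameter over 0.05 ... 1.00 while fixing the others.

    evaluate : callable taking keyword arguments alpha, beta, theta and
               returning mean recognition accuracy (MRA); stands in for
               a full train/test run of the ensemble model.
    name     : which parameter to sweep ("alpha", "beta", or "theta").
    defaults : dict of the fixed values for all three parameters.
    """
    # 20 candidate values: 0.05, 0.10, ..., 1.00.
    values = [round(step * i, 2) for i in range(1, 21)]
    results = []
    for v in values:
        params = dict(defaults, **{name: v})
        results.append((v, evaluate(**params)))
    # Keep the parameter value that yields the highest MRA.
    return max(results, key=lambda r: r[1])
```

In practice each call to `evaluate` would be a full cross-validated training run, so even this one-at-a-time sweep is expensive; the exhaustive joint grid (20^3 = 8000 runs) would cost far more.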

4. Conclusions

In this paper, we presented an ensemble deep learning model which integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information in multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of glia cells in the DBN-GC has the ability to mine this interchannel correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals should be complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and of the ensemble deep learning framework-based fusion method is validated by the experiments on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset has to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail. Then a username and password to download the data will be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Natural Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

Figure 9: MRA of the proposed ensemble deep learning model with different values of glia threshold (accuracy in %, arousal and valence, threshold from 0.05 to 1).


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220–232, 2014.
[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440–1444, 2018.
[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145–156, 2018.
[4] C.-N. Anagnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155–177, 2015.
[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of 2015 ACM International Conference on Multimodal Interaction, pp. 467–474, ACM, Seattle, WA, USA, November 2015.
[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.
[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94–106, 2014.
[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230–2237, 2013.
[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.
[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101–120, Springer, Berlin, Germany, 2013.
[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390–396, 2010.
[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1–6, 2009.
[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589–597, 2010.
[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306–310, 1970.
[15] K. Ansari-Asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of 15th European Signal Processing Conference, pp. 1241–1245, Springer, Poznan, Poland, September 2007.
[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457–466, Springer, Memphis, TN, USA, October 2011.
[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.
[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81–97, 2010.
[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162–172, 2014.
[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012.
[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241–247, 2002.
[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21–25, 2007.
[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138–144, IEEE, Granada, Spain, December 2016.
[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of International Conference on Computer Vision and Image Processing, pp. 335–343, Springer, Roorkee, Uttarakhand, December 2016.
[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39–50, 2016.
[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505–511, 2013.
[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1–6, IEEE, Montreal, Canada, September 2016.
[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1–27, 2017.
[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746–4752, AAAI, San Francisco, CA, USA, February 2017.
[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of 2012 International Joint Conference on Neural Networks, pp. 1–6, IEEE, Brisbane, Australia, June 2012.
[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943–952, 2016.
[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149–164, 2015.
[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35–41, 2016.
[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103–111, 2016.
[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93–110, 2017.
[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of IEEE International Conference on Bioinformatics and Biomedicine, pp. 352–359, IEEE, Shenzhen, China, December 2016.
[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.

Computational Intelligence and Neuroscience 11

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 7: RecognitionofEmotionsUsingMultichannelEEGDataand DBN-GC …downloads.hindawi.com/journals/cin/2018/9750904.pdf · 2019-07-30 · types of deep learning approaches, stacked denoising

8064 readings per channel into 10 batches. For each batch, 9 characteristics, such as the mean and variance, were extracted. Then, a deep neural network and a convolutional neural network were used for emotion recognition [29]. Li et al. derived frame-based EEG features through wavelet and scalogram transforms and designed a hybrid deep learning model combining a CNN and an RNN [37]. In addition, Li et al. also trained a two-layer DBN to extract high-level features for each channel, with an SVM with an RBF kernel employed as the classifier [38]. Wang and Shang presented a DBN-based system that extracted features from raw physiological data, and three classifiers were built to predict emotion states [26]. Since the above studies did not report the F1-score as a recognition performance metric, the average recognition performance of the proposed model is also compared with that of reference [32]. In that reference, Koelstra et al. analyzed central nervous system (CNS), peripheral nervous system (PNS), and multimedia content analysis features for emotion recognition. Because the proposed DBN-GC-based method is based on EEG signals only, Table 4 lists only the recognition results of the CNS feature-based single modality from reference [32]. The DEAP dataset is used in all of the studies in Table 4, and in each of them the trials are divided into two classes for valence and arousal, respectively (ratings above 5 versus ratings below 5).

As can be seen from Table 4, the DBN-GC-based ensemble deep learning model outperforms most of the above deep classifiers in recognition accuracy. Meanwhile, the F1-scores achieved by the proposed model are clearly superior to the 0.5830 and 0.5630 reported in reference [32]. The proposed method achieves a mean recognition accuracy (MRA) of 0.7683 on valence, which is lower than the highest MRA reported on valence (0.8141).
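The binary protocol and the two metrics reported in Table 4 can be made concrete with a short sketch; the ratings and predictions below are invented illustration values, not actual DEAP trials.

```python
# Illustrative only: fabricated ratings/predictions, not real DEAP data.
def binarize(ratings, threshold=5.0):
    """DEAP-style binarization: ratings above 5 -> class 1, otherwise class 0."""
    return [1 if r > threshold else 0 for r in ratings]

def accuracy_and_f1(y_true, y_pred):
    """Accuracy and F1-score for a binary task, the two metrics in Table 4."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return acc, f1

valence_ratings = [7.1, 2.3, 5.5, 4.9, 8.0, 3.2]   # hypothetical 1-9 ratings
y_true = binarize(valence_ratings)                  # [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]                         # hypothetical classifier output
acc, f1 = accuracy_and_f1(y_true, y_pred)           # acc = 4/6, f1 = 2/3
```

Because the DEAP classes are not perfectly balanced, accuracy alone can flatter a classifier, which is why the F1-score comparison against reference [32] matters.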

Figure 6: Comparison of emotion recognition results between DBN and DBN-GC: (a) accuracy (arousal); (b) accuracy (valence); (c) F1 score (arousal); (d) F1 score (valence). Each panel compares DBN6, DBN-GC6, DBN7, DBN-GC7, DBN8, and DBN-GC8 on a 0-1 scale.

Table 3: Hyperparameters of each parallel DBN-GC in the ensemble deep learning model.

            h1    h2    h3
DBN-GC1    110   105    90
DBN-GC2    145   145   120
DBN-GC3     55    55    45
DBN-GC4    120    55    45
DBN-GC5    155    80    50
Sum        585   440   350

Table 4: Average recognition performance comparison between the DBN-GC-based ensemble deep learning model and several reported studies.

                                             Arousal                Valence
                                          Accuracy  F1-score    Accuracy  F1-score
Deep neural network [29]                   0.7313      —          0.7578      —
Convolutional neural network [29]          0.7336      —          0.8141      —
CNN+RNN [37]                               0.7412      —          0.7206      —
DBN-SVM [38]                               0.6420      —          0.5840      —
Wang and Shang [26]                        0.5120      —          0.6090      —
CNS feature-based single modality [32]     0.6200    0.5830       0.5760    0.5630
DBN-GC-based ensemble deep learning model  0.7592    0.6931       0.7683    0.7015

Computational Intelligence and Neuroscience 7

This may be because the CNN model proposed by Tripathi et al. benefits from sufficient training and validation instances. In addition, the ensemble deep learning model also outperforms the three DBN-GCs in Table 3 that are each trained on a single-feature subset. This indicates that the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals are complementary in emotion recognition and that the proposed method can integrate the different types of characteristics effectively.
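The fusion step behind this result can be illustrated with a rough sketch: each member network maps one feature domain to an intermediate representation, and the top-level discriminative layer sees their concatenation. The dimensions and projection matrices below are arbitrary stand-ins, not the trained DBN-GC weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def member_representation(x, w):
    """Top-layer activation of one member network: sigmoid of a linear map."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Raw features from the three domains (sizes are invented for this sketch).
x_time = rng.normal(size=16)   # time domain characteristics
x_freq = rng.normal(size=24)   # frequency domain characteristics
x_tf   = rng.normal(size=12)   # time-frequency characteristics

# Random stand-ins for each member's learned projection to 8 hidden units.
w_time = rng.normal(size=(16, 8))
w_freq = rng.normal(size=(24, 8))
w_tf   = rng.normal(size=(12, 8))

# The fused vector that a top-level classifier would consume.
fused = np.concatenate([
    member_representation(x_time, w_time),
    member_representation(x_freq, w_freq),
    member_representation(x_tf, w_tf),
])
```

The point of the concatenation is that no single domain has to carry the whole decision: the top layer can weight complementary evidence from all three abstractions.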

3.3. Parameter Selection. Each parallel DBN-GC in the proposed ensemble deep learning model contains three important parameters: the weight coefficient of the glia effect value α, the attenuation factor β, and the glia threshold θ. These three parameters determine the effect of the glia cells on the DBN and thus affect the performance of the proposed model. Since there is no self-adaptive adjustment method, the three parameters are set manually; each takes 20 values in the range of 0 to 1 with an interval of 0.05. Under the different values of each parameter, the MRA on valence

Figure 7: MRA of the proposed ensemble deep learning model with different values of the glia effect weight (x-axis: weight coefficient of glial effect, 0.05-1.00; y-axis: accuracy (%), 70-80; curves: arousal and valence).

Figure 8: MRA of the proposed ensemble deep learning model with different values of the attenuation factor (x-axis: attenuation factor, 0.05-1.00; y-axis: accuracy (%), 70-80; curves: arousal and valence).


and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7-9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, higher values appear when the weight is close to 0.15 or 0.80. Taking these two indicators into account simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. As the attenuation factor increases to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus, it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. The MRA on valence rises slowly once the glia threshold exceeds 0.25. It is therefore appropriate to set the glia threshold within the range of 0.70 to 0.80.
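The manual sweep described above can be sketched as an exhaustive grid search. Here `evaluate_mra` is a hypothetical placeholder for training and validating the full model at one parameter setting; the toy objective below simply peaks near the values selected in this section.

```python
import itertools

def evaluate_mra(alpha, beta, theta):
    """Stand-in objective; the real MRA would come from cross-validation runs."""
    return 1.0 - abs(alpha - 0.80) - abs(beta - 0.45) - abs(theta - 0.75)

# 20 candidate values per parameter: 0.05, 0.10, ..., 1.00
grid = [round(0.05 * k, 2) for k in range(1, 21)]

# Exhaustive search over all 20^3 = 8000 (alpha, beta, theta) settings.
best = max(itertools.product(grid, grid, grid),
           key=lambda p: evaluate_mra(*p))
# With this toy objective, best == (0.8, 0.45, 0.75)
```

In practice each call to the real objective is expensive (a full train/validate cycle), which is why the paper sweeps one parameter at a time rather than the full 8000-point grid.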

4. Conclusions

In this paper, we presented an ensemble deep learning model that integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information in multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of the glia cells in the DBN-GC makes it possible to mine this correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals are complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and of the ensemble deep learning framework-based fusion method is validated by experiments on the DEAP database.

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset has to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail. Then a username and password to download the data will be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant no. 2015GGJS-068, the Fundamental Research Funds for

Figure 9: MRA of the proposed ensemble deep learning model with different values of the glia threshold (x-axis: glia threshold, 0.05-1.00; y-axis: accuracy (%), 70-80; curves: arousal and valence).


the Universities of Henan Province under Grant no. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant no. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220-232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440-1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145-156, 2018.

[4] C.-N. Anagnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155-177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of the 2015 ACM International Conference on Multimodal Interaction, pp. 467-474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94-106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230-2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101-120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390-396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1-6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589-597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306-310, 1970.

[15] K. Ansari-Asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of the 15th European Signal Processing Conference, pp. 1241-1245, Springer, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457-466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186-197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81-97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162-172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498-3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241-247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21-25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of the 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138-144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of the International Conference on Computer Vision and Image Processing, pp. 335-343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39-50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505-511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of the IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1-6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1-27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of the Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746-4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of the 2012 International Joint Conference on Neural Networks, pp. 1-6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943-952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149-164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35-41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103-111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93-110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, pp. 352-359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of the SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.


[31] Z-Q Geng and Y-K Zhang ldquoAn improved deep beliefnetwork inspired by glia chainsrdquo Acta Automatica Sinicavol 42 no 6 pp 943ndash952 2016

[32] S Koelstra C Muehl M Soleymani et al ldquoDEAP a databasefor emotion analysis using physiological signalsrdquo IEEETransaction on Affective Computing vol 3 no 1 pp 18ndash312012

[33] M Khezri M Firoozabadi and A R Sharafat ldquoReliableemotion recognition system based on dynamic adaptive fu-sion of forehead biopotentials and physiological signalsrdquoComputer Methods and Programs in Biomedicine vol 122no 2 pp 149ndash164 2015

[34] J Atkinson and D Campos ldquoImproving BCI-based emotionrecognition by combining EEG feature selection and kernelclassifiersrdquo Expert Systems with Applications vol 47pp 35ndash41 2016

[35] C Li C Xu and Z Feng ldquoAnalysis of physiological foremotion recognition with the IRS modelrdquo Neurocomputingvol 178 pp 103ndash111 2016

[36] Z Yin M-Y Zhao Y-X Wang J Yang and J ZhangldquoRecognition of emotions using multimodal physiologicalsignals and an ensemble deep learning modelrdquo ComputerMethods and Programs in Biomedicine vol 140 pp 93ndash1102017

[37] X Li D-W Song P Zhang et al ldquoEmotion recognition frommulti-channel EEG data through convolutional recurrentneural networkrdquo in Proceedings of IEEE International Con-ference on Bioinformatics and Biomedicine pp 352ndash359 IEEEShenzhen China December 2016

[38] X Li P Zhang and D-W Song ldquoEEG based emotionidentification using unsupervised deep feature learningrdquo inProceedings of SIGIR2015 Workshop on Neuro-PhysiologicalMethods in IR Research August 2015

Computational Intelligence and Neuroscience 11

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 9: RecognitionofEmotionsUsingMultichannelEEGDataand DBN-GC …downloads.hindawi.com/journals/cin/2018/9750904.pdf · 2019-07-30 · types of deep learning approaches, stacked denoising

and the MRA on arousal are taken as the final evaluation basis, and the analysis results are shown in Figures 7–9.

As can be seen from Figure 7, when the glia effect weight is between 0.05 and 1, the MRA on arousal as well as the MRA on valence fluctuates continuously. The highest MRA on arousal (76.12%) is obtained when the glia effect weight is set to 0.75. For the MRA on valence, higher values appear when the weight is close to 0.15 or 0.80. Taking these two indicators into account simultaneously, it is appropriate to set the weight coefficient to 0.80.

Figure 8 shows the results of arousal classification and valence classification with different values of the attenuation factor. When the attenuation factor is between 0.05 and 0.35, the MRA on arousal as well as the MRA on valence fluctuates greatly. As the attenuation factor increases to 0.4, both the MRA on arousal and the MRA on valence increase rapidly. Once the attenuation factor exceeds 0.5, the two MRAs decrease slowly. Thus, it is appropriate to set the attenuation factor to 0.40 or 0.50.

Figure 9 shows the results of arousal classification and valence classification with different values of the glia threshold. Although the highest MRA on arousal occurs with the glia threshold set to 0.35, the MRA is more stable when the glia threshold is between 0.65 and 1. The MRA on valence rises slowly once the glia threshold exceeds 0.25. It is appropriate to set the glia threshold within the range of 0.70 to 0.80.
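The parameter sweep described above amounts to a plain grid search over the glia effect weight, attenuation factor, and glia threshold. A minimal sketch, assuming a hypothetical evaluation function: `evaluate_dbn_gc` below is a toy surrogate that merely peaks near the settings chosen above, not the paper's actual training and testing code on DEAP.

```python
from itertools import product

# Hypothetical stand-in for training a DBN-GC ensemble and returning its mean
# recognition accuracy (MRA); a toy surrogate peaking near (0.80, 0.40, 0.75).
def evaluate_dbn_gc(glia_weight, attenuation, threshold):
    return 76.0 - abs(glia_weight - 0.80) - abs(attenuation - 0.40) - abs(threshold - 0.75)

def grid_search(step=0.05):
    # Candidate values 0.05, 0.10, ..., 1.00, mirroring the sweep in Figures 7-9.
    grid = [round(step * i, 2) for i in range(1, int(1 / step) + 1)]
    # Exhaustively evaluate every (weight, attenuation, threshold) combination.
    return max(product(grid, grid, grid), key=lambda p: evaluate_dbn_gc(*p))

best_weight, best_attenuation, best_threshold = grid_search()
# With the toy surrogate this returns (0.8, 0.4, 0.75).
```

In practice each grid point would require retraining the ensemble, so a coarse step such as 0.05 keeps the sweep tractable.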

4. Conclusions

In this paper, we presented an ensemble deep learning model that integrates parallel DBN-GCs and a discriminative RBM for emotion recognition. The interchannel correlation information in multichannel EEG signals, which is often neglected, contains salient information regarding emotion states, and the chain structure of glia cells in the DBN-GC is able to mine this interchannel correlation information. In addition, the time domain characteristics, frequency domain characteristics, and time-frequency characteristics of EEG signals are complementary for emotion recognition, and the ensemble deep learning framework benefits from the comprehensive fusion of multidomain feature abstractions. The reliability of the DBN-GC and of the ensemble deep learning framework-based fusion method is validated by experiments on the DEAP database.
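The pipeline summarized above (parallel member networks, one per feature domain, fused by a discriminative classifier) can be sketched structurally as follows. `MemberDBN` and `FusionClassifier` are illustrative stand-ins with untrained random weights and hypothetical feature dimensions; they show only the data flow, not the paper's DBN-GC pretraining or RBM learning.

```python
import numpy as np

rng = np.random.default_rng(0)

class MemberDBN:
    """Stand-in for one member DBN-GC: maps raw features from a single
    domain (time, frequency, or time-frequency) to an intermediate
    representation."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.standard_normal((in_dim, hid_dim)) * 0.1
    def transform(self, x):
        # Logistic hidden activations, as in an RBM-style hidden layer.
        return 1.0 / (1.0 + np.exp(-x @ self.W))

class FusionClassifier:
    """Stand-in for the discriminative RBM that fuses the three
    higher-level abstractions into an emotion label."""
    def __init__(self, in_dim, n_classes=2):
        self.W = rng.standard_normal((in_dim, n_classes)) * 0.1
    def predict(self, h):
        return int(np.argmax(h @ self.W))

# Three member networks, one per feature domain, fused at the top.
members = {"time": MemberDBN(16, 8), "freq": MemberDBN(32, 8), "tf": MemberDBN(64, 8)}
fusion = FusionClassifier(in_dim=3 * 8)

def recognize(features):
    """features: dict mapping domain name -> raw EEG feature vector."""
    h = np.concatenate([members[d].transform(features[d]) for d in ("time", "freq", "tf")])
    return fusion.predict(h)  # e.g. 0 = low arousal, 1 = high arousal
```

The design point this illustrates is late fusion: each domain is abstracted independently before a single discriminative layer sees the concatenation of all three representations.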

Data Availability

The DEAP dataset used in our manuscript is a dataset for emotion analysis using electroencephalogram (EEG), physiological, and video signals. The DEAP dataset is available at http://www.eecs.qmul.ac.uk/mmv/datasets/deap/. Anyone interested in using this dataset has to print, sign, and scan an EULA (end-user license agreement) and return it via e-mail. A username and password to download the data will then be provided. The dataset was first presented in reference [32], "DEAP: a database for emotion analysis using physiological signals."

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Figure 9: MRA of the proposed ensemble deep learning model with different values of glia threshold.

Acknowledgments

This paper was supported by the China National Nature Science Foundation (Nos. 61502150 and 61300124), the Foundation for University Key Teacher by Henan Province under Grant No. 2015GGJS-068, the Fundamental Research Funds for the Universities of Henan Province under Grant No. NSFRF1616, the Foundation for Scientific and Technological Project of Henan Province under Grant No. 172102210279, and the Key Scientific Research Projects of Universities in Henan (No. 19A520004).

References

[1] M.-F. Pedro, M.-P. Arquimedes, J.-C. Antoni, and J. M. B. Rubio, "Evaluating the research in automatic emotion recognition," IETE Technical Review, vol. 31, no. 3, pp. 220–232, 2014.

[2] M. Chen, X. He, J. Yang, and H. Zhang, "3-D convolutional recurrent neural networks with attention model for speech emotion recognition," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1440–1444, 2018.

[3] Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, Y. Mei, and J.-W. Mao, "Speech emotion recognition based on an improved brain emotion learning model," Neurocomputing, vol. 309, pp. 145–156, 2018.

[4] C.-N. Anagnostopoulos, T. Iliou, and I. Giannoukos, "Features and classifiers for emotion recognition from speech: a survey from 2000 to 2011," Artificial Intelligence Review, vol. 43, no. 2, pp. 155–177, 2015.

[5] S.-E. Kahou, V. Michalski, K. Konda et al., "Recurrent neural networks for emotion recognition in video," in Proceedings of the 2015 ACM International Conference on Multimodal Interaction, pp. 467–474, ACM, Seattle, WA, USA, November 2015.

[6] Y. Huang, J. Yang, P. Liao, and J. Pan, "Fusion of facial expressions and EEG for multimodal emotion recognition," Computational Intelligence and Neuroscience, vol. 2017, Article ID 2107451, 8 pages, 2017.

[7] X.-W. Wang, D. Nie, and B.-L. Lu, "Emotional state classification from EEG data using machine learning approach," Neurocomputing, vol. 129, pp. 94–106, 2014.

[8] H. J. Yoon and S. Y. Chung, "EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230–2237, 2013.

[9] A.-N. Abeer, H. Manar, A.-O. Yousef, and A.-W. Areej, "Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review," Applied Sciences-Basel, vol. 7, no. 12, p. 1239, 2017.

[10] Y. Liu and O. Sourina, "Real-time fractal-based valence level recognition from EEG," in Transactions on Computational Science XVIII, vol. 7848, pp. 101–120, Springer, Berlin, Germany, 2013.

[11] M. Murugappan, N. Ramachandran, and Y. Sazali, "Classification of human emotion from EEG using discrete wavelet transform," Journal of Biomedical Science and Engineering, vol. 3, no. 4, pp. 390–396, 2010.

[12] T.-Y. Chai, W.-S. San, C.-S. Tan, and M. Rizon, "Classification of human emotions from EEG signals using statistical features and neural network," International Journal of Integrated Engineering, vol. 1, no. 3, pp. 1–6, 2009.

[13] C. Frantzidis, C. Bratsas, C. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, "Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589–597, 2010.

[14] B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306–310, 1970.

[15] K. Ansari-Asl, G. Chanel, and T. Pun, "A channel selection method for EEG classification in emotion assessment based on synchronization likelihood," in Proceedings of the 15th European Signal Processing Conference, pp. 1241–1245, Poznan, Poland, September 2007.

[16] E. Kroupi, A. Yazdani, and T. Ebrahimi, "EEG correlates of different emotional states elicited during watching music videos," in Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction, pp. 457–466, Springer, Memphis, TN, USA, October 2011.

[17] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.

[18] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 81–97, 2010.

[19] G. K. Verma and U. S. Tiwary, "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals," NeuroImage, vol. 102, pp. 162–172, 2014.

[20] S. K. Hadjidimitriou and L. J. Hadjileontiadis, "Toward an EEG-based recognition of music liking using time-frequency analysis," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012.

[21] M. Akin, "Comparison of wavelet transform and FFT methods in the analysis of EEG signals," Journal of Medical Systems, vol. 26, no. 3, pp. 241–247, 2002.

[22] M. Murugappan, M. Rizon, S. Yaacob et al., "EEG feature extraction for classifying emotions using FCM and FKM," International Journal of Computers and Communications, vol. 1, no. 2, pp. 21–25, 2007.

[23] A. Samara, M.-L.-R. Menezes, and L. Galway, "Feature extraction for emotion recognition and modeling using neurophysiological data," in Proceedings of the 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS), pp. 138–144, IEEE, Granada, Spain, December 2016.

[24] N. Jadhav, R. Manthalkar, and Y. Joshi, "Electroencephalography-based emotion recognition using gray-level co-occurrence matrix features," in Proceedings of the International Conference on Computer Vision and Image Processing, pp. 335–343, Springer, Roorkee, Uttarakhand, December 2016.

[25] N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39–50, 2016.

[26] D. Wang and Y. Shang, "Modeling physiological data with deep belief networks," International Journal of Information and Education Technology, vol. 3, no. 5, pp. 505–511, 2013.

[27] H. Xu and K.-N. Plataniotis, "Affective states classification using EEG and semi-supervised deep learning approaches," in Proceedings of the IEEE 18th International Workshop on Multimedia Signal Processing, pp. 1–6, IEEE, Montreal, Canada, September 2016.

[28] X. Li, D. Song, P. Zhang, Y. Hou, and B. Hu, "Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring," International Journal of Data Mining and Bioinformatics, vol. 18, no. 1, pp. 1–27, 2017.

[29] S. Tripathi, S. Acharya, R.-D. Sharma et al., "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proceedings of the Twenty-Ninth AAAI Conference on Innovative Applications of Artificial Intelligence, pp. 4746–4752, AAAI, San Francisco, CA, USA, February 2017.

[30] C. Ikuta, Y. Uwate, and Y. Nishio, "Multi-layer perceptron with positive and negative pulse glia chain for solving two-spirals problem," in Proceedings of the 2012 International Joint Conference on Neural Networks, pp. 1–6, IEEE, Brisbane, Australia, June 2012.

[31] Z.-Q. Geng and Y.-K. Zhang, "An improved deep belief network inspired by glia chains," Acta Automatica Sinica, vol. 42, no. 6, pp. 943–952, 2016.

[32] S. Koelstra, C. Muehl, M. Soleymani et al., "DEAP: a database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.

[33] M. Khezri, M. Firoozabadi, and A. R. Sharafat, "Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 149–164, 2015.

[34] J. Atkinson and D. Campos, "Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers," Expert Systems with Applications, vol. 47, pp. 35–41, 2016.

[35] C. Li, C. Xu, and Z. Feng, "Analysis of physiological for emotion recognition with the IRS model," Neurocomputing, vol. 178, pp. 103–111, 2016.

[36] Z. Yin, M.-Y. Zhao, Y.-X. Wang, J. Yang, and J. Zhang, "Recognition of emotions using multimodal physiological signals and an ensemble deep learning model," Computer Methods and Programs in Biomedicine, vol. 140, pp. 93–110, 2017.

[37] X. Li, D.-W. Song, P. Zhang et al., "Emotion recognition from multi-channel EEG data through convolutional recurrent neural network," in Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, pp. 352–359, IEEE, Shenzhen, China, December 2016.

[38] X. Li, P. Zhang, and D.-W. Song, "EEG based emotion identification using unsupervised deep feature learning," in Proceedings of the SIGIR2015 Workshop on Neuro-Physiological Methods in IR Research, August 2015.


wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom