
FEBEI - Face Expression Based Emoticon Identification
CS - B657 Computer Vision

Nethra Chandrasekaran Sashikar - necsashi
Prashanth Kumar Murali - prmurali

Robert J Henderson - rojahend

Abstract

The Face Expression Based Emoticon Identification (FEBEI) system is an open source extension to the Tracking.js framework which converts a human facial expression to the best matching emoticon. The contribution of this project was to build a robust classifier which can identify facial expressions in real time without any reliance on an external server or computation node. An entirely client-side JavaScript implementation has clear privacy benefits as well as avoiding the lag inherent in uploading and downloading images. We accomplished this by utilizing several computationally efficient methods. Tracking.js provided a Viola-Jones based face detector which we used to pass facial images to our own implementation of an eigenemotion detection system trained to distinguish between happy and angry faces. We have implemented a similar eigenface classifier in Python and have trained a Convolutional Neural Network (CNN) to classify emotions to provide a comparative view of its advantages. We aim to make FEBEI easily extendable so that the developer community will be able to add classifiers for more emoticons.

Introduction
Emoticons have become an essential part of today's digital communication. Emoticons (also known as emojis) are used to express the emotions of a person through text in a way that is not possible with words alone. Emojis have become important enough that they have even been annotated in a few WordNets. Computer vision has grown to a large scale, and every commercial product is using the benefits of high speed image processing and machine learning to interpret meaningful details from the visual world. One of the flourishing directions in computer vision is the identification of human faces and facial expression detection. Many social and customer facing applications like Snapchat and Facebook have started using facial expression techniques and are investing heavily in their research. One major component to consider while using customer images and data is privacy. Every action in this area normally requires uploading data to a server, which can be a concern as image data is sensitive.

FEBEI is a real time facial expression detection system which runs in a browser with the help of JavaScript. The system thereby eliminates the need for a server: no data is uploaded anywhere, which both reduces the time taken to classify, since network time is avoided, and ensures the privacy of the data. One of the major challenges in building such a system is that it must work in real time in a browser, without lag and without overburdening the processor. For this reason, we cannot build a neural network with millions of parameters and have instead chosen a lightweight classifier which performs a more efficient classification.

Background & Related Work
The key tasks at hand are Face Detection and Facial Expression Recognition. For the former task there are various techniques such as Fisherfaces, Eigenfaces, the Viola-Jones object detection framework, the Hausdorff distance, etc. A survey paper by Vinay Bettadapura [8] provides an excellent overview of the state of computer vision research as it relates to emotion detection. Many of the primary sources mentioned in the following sections were brought to our attention through this paper.

Facial Action Coding System (FACS)
Ekman and Friesen [1] observed the facial muscles which are important in expressing emotion and compiled their findings into a system of 46 action units (AUs). These action units, examples of which are raising of the inner eyebrow and raising of the outer eyebrow, were immensely important in quantifying human expressions. Before this system was published, facial expression research relied heavily on human labeling of example expressions, and many researchers were concerned about bias related to cultural context or the labeler's emotional state at the time. The advent of Ekman and Friesen's Facial Action Coding System in 1977 put these concerns to rest, and it quickly became the gold standard.

Facial action units are closely tied to the musculature of the face and can therefore be combined in ways that are either independent of each other or that interact to form inherently different appearances. In this way, a facial expression such as fear can be quantitatively described as the combination of facial action units 1, 2, and 26 [2]. It is important to note that automatically detecting facial AUs is a challenging task, especially because some AUs are very subtle or only visible from certain angles. AU #29, for example, describes a jaw movement that is not apparent from the front and requires a profile view to detect [3].

Facial Action Parameters (FAPs)
Ekman's Facial Action Coding System was found to be extremely comprehensive, as researchers were able to identify over 7000 combinations of the 46 atomic AUs [4], but it is important to note that real-life expressions are dynamic and there is more to them than still images of contracted muscles. Even after FACS was adopted within the computer vision community, computer animation researchers were struggling to agree upon a system for representing a face in motion. The Moving Picture Experts Group (MPEG) introduced a well thought out set of facial animation parameters (FAPs) which became standard in 1999.

This system describes a neutral face and a set of 84 key facial feature points. The FAPs are a set of 68 parameters describing the magnitude of movement of these facial feature points resulting from an action unit, very closely related to those defined by FACS, and researchers [6, 7] have had some success in mapping the AUs of FACS to FAPs.

Viola-Jones and Haar-like Features Applied to Emotions
In 2004, Paul Viola and Michael Jones developed an extremely efficient face detector with high performance by using an AdaBoost learning algorithm to classify Haar-like features [9]. Wang et al. applied this method to facial expressions and were able to sort faces into one of 7 archetypal facial expressions with 92.4% accuracy [10]. An investigation by Whitehill and Omlin [11] found that the use of Haar features in combination with the AdaBoost boosting algorithm was at least two orders of magnitude faster than the more standard classification using SVMs and Gabor filters, without a significant drop in performance. More recently, work done by Happy et al. [12] in 2015 found that the Viola-Jones algorithm was valuable not for direct detection of emotions but as a pre-processing step to efficiently detect the most important face regions (i.e., lip corners and eyebrow edges), which were then further processed with local binary pattern histograms. Their solution performed similarly to other state-of-the-art methods but required far less computational time.

Emotion Detection from Frontal Facial Image [14]
In this approach, faces are segmented from the rest of the image with a skin color model to avoid noise. Facial parts such as the eye and lip regions are extracted from the image by the Viola-Jones algorithm. Emotion detection is then done by analyzing the position of the mouth and eyes in each of the test images.

Novel Approach to Face Expression Analysis in Determining Emotional Valence [16]
In this technique, image acquisition is done from video. The image is segmented into points of interest such as the nose, eyes and mouth. Texture analysis is done based on cheek and forehead activity. Facial characteristic points are changes in the gradient, which is the vector sum of color differences in the neighborhood of a pixel. Texture covers the brightness difference, color difference, shape and smoothing factors. Based on the texture and features, specific scores are obtained and the emotion is classified as positive or negative.

Real Time Classification of Evoked Emotion [15]
Real time classification of evoked emotions utilizes the face, cardiovascular activity, skin conductance and somatic activity. Features are extracted from each of the images. Facial features are obtained by feeding the image to Neven Vision, along with the subject's cardiovascular activity. Relevant features are selected from the output using the Waikato Environment for Knowledge Analysis (WEKA). Finally, classification is done using two different techniques, a neural net and linear regression, and a comparative study is made of the results from the two techniques.

The algorithm that we picked needed to give robust results in real time, and the Viola-Jones object detection framework best fit this requirement for detecting faces. Our initial hope was that we could use the same Viola-Jones technique to classify facial expressions, but on further research we found that Viola-Jones did not classify emotions nearly as well as it detected faces. We then looked for other methods which could run in real time and concluded that it was possible with eigenfaces. The following paper provided us with considerable insight and confidence in this technique.

Eigenface Based Recognition of Emotion Variant Faces [17]
In this process a face library is created and preprocessing is done to obtain frontal faces. Eigenfaces are computed for all the training images, and a test image is classified by the following steps (a NumPy sketch of the procedure follows the list):

1. Generate a vector for each image and create a matrix of vectors for each image type.
2. Compute the mean of the training faces.
3. Subtract the mean image of each type from the test image.
4. Compute the covariance matrix for each of the difference vectors created.
5. Compute the eigenvalues and eigenvectors of each covariance matrix.
6. Classify the test image as the training image type for which the eigenvalue is maximum.
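Reference [17] describes these steps only at a high level, so the following is a hedged NumPy sketch of one reasonable reading: build a per-class eigenface model (happy, angry) and assign a test face to the class whose eigenfaces capture the most energy of the mean-subtracted image. The array shapes, number of retained components and scoring rule are assumptions, not the exact procedure of [17].

```python
import numpy as np

def train_class_model(images, n_components=10):
    """images: (n_samples, n_pixels) array of one expression class."""
    mean_face = images.mean(axis=0)
    diffs = images - mean_face
    # Eigen-decompose the small (n_samples x n_samples) covariance surrogate
    _, eigvecs = np.linalg.eigh(diffs @ diffs.T)
    # Map the strongest eigenvectors back to pixel space (the eigenfaces)
    eigenfaces = (diffs.T @ eigvecs[:, -n_components:]).T
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def projection_energy(test_image, mean_face, eigenfaces):
    """How much of the mean-subtracted test image the class eigenfaces explain."""
    weights = eigenfaces @ (test_image - mean_face)
    return float(np.sum(weights ** 2))

def classify(test_image, class_models):
    """class_models: {'happy': (mean, eigenfaces), 'angry': (mean, eigenfaces)}."""
    return max(class_models, key=lambda c: projection_energy(test_image, *class_models[c]))
```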

Model and Experiments

Datasets
The initial hurdle that we had to overcome in our project was to find a large and well labeled image data set for facial expressions. Though the internet is flooded with a large number of face data sets, we could not find well labeled expression data sets. The data set for this project was built from the Cohn-Kanade AU-Coded Facial Expression Database [18] from the University of Pittsburgh. We collected images for two facial expressions, happy and sad. We built our own data set for the tongue-out expression, since no available image data set covers it; it is very specific to social media and chat rooms. We were not able to collect many facial expression images from other sources since the other available image data sets were all proprietary. All the images used in this data set are grayscale and resized to a dimension of 256×256.
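As a small illustration of this preprocessing step, the sketch below converts raw images to grayscale and resizes them to 256×256 with Pillow; the directory names and file extension are assumptions.

```python
from pathlib import Path
from PIL import Image

src_dir, dst_dir = Path("raw_faces"), Path("dataset")   # assumed layout
dst_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.png"):
    img = Image.open(path).convert("L")   # grayscale
    img = img.resize((256, 256))          # uniform 256x256 frames
    img.save(dst_dir / path.name)
```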

Classification with Viola-Jones and Haar Features
Facial expression detection primarily involves detecting faces in the image frame and applying an expression detection algorithm to the faces. Our initial approach involved detecting faces with the Viola-Jones classifier [20], which is a real time object detection technique.

The Viola-Jones classifier strictly requires frontal face images for face detection. We detect facial expressions from the identified face frames using facial action units computed with the Haar technique. The detected action units are then boosted with AdaBoost and classified with a cascade classifier. With this technique we could not achieve significant results, as it did not provide fine enough detail to distinguish the subtle differences between the same face making a happy or an angry expression.
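For reference, the same Viola-Jones Haar-cascade face detection is also available outside the browser; the sketch below uses OpenCV's bundled frontal-face cascade. The image path and detector parameters are assumptions, and this is not the tracking.js code FEBEI actually uses.

```python
import cv2

# OpenCV ships pre-trained Haar cascades; this one detects frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)            # assumed input image
faces = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = img[y:y + h, x:x + w]   # region passed on to an expression classifier
```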

Eigen Emotions
We initially started to search for datasets to use in this classification. To our dismay there was no reliable dataset free to download. We tried to obtain the Ohio State AR dataset and the CMU face images dataset, but both of these were restricted to students of those universities. We then proceeded to collect other datasets: LFW from the University of Massachusetts and the Cohn-Kanade dataset from the University of Pittsburgh. We formed a compound dataset from these with two types of faces, happy and angry, collecting around 300 images for each of these emotions.

We formed the eigenfaces with the face recognition code of scikit-learn in Python. That code is based on the LFW dataset, with lfw.py handling the loading of faces from the dataset. We added a new dataset to scikit-learn called face_expressions, along with a new handler for it in sklearn.datasets called face_expressions.py. We hosted the pairs of image names for training and testing as a text file, together with the actual image dataset, on localhost, and the code downloads them from there. It processes the set of images and forms the EigenEmotions, which are then used to classify the pairs of test images. We intend to extract the model generated by these EigenEmotions and use it to classify in real time in a browser.

To get this running in a browser we decided to use a framework called tracking.js. Tracking.js is a computer vision framework built in JavaScript, intended to solve vision-based problems in a web browser. It has modules for integrating a video canvas or an image canvas in the browser and extracting details from it. The creators of tracking.js have added a module for classifying images based on a Viola-Jones classification model; no training can be done in tracking.js itself. We plan to add an additional module for classifying based on the EigenEmotions. The EigenEmotions model will be extracted from scikit-learn and included in the tracking.js classes. The face will be extracted from the video stream using the default Viola-Jones classification model in tracking.js, the face region fetched from it will be tested by the EigenEmotions classifier, and an emotion will be detected. We will keep a hash mapping each emotion to an emoticon, which will be used to identify the emoticon to send to the view.

Tracking.js

Tracking.js [19] is an open source JavaScript framework with several computer vision algorithms implemented using HTML specifications and JavaScript. With the arrival of tracking.js, computer vision algorithms can now be applied to real time images fed from the computer's webcam. It allows us to detect colors, faces and facial features in real time within the browser environment. Tracking.js is very responsive as it is built with a lightweight core and an interactive interface. Tracking.js processes images at high speed since all the processing happens on the client side and no lag is introduced by uploading or downloading images.

Tracking.js offers a wide range of computer vision algorithms. For example, the color detection function can take video from a live feed and recognize the desired color, and the face detection functionality detects multiple frontal faces. All these algorithms take input from the webcam, which is included with HTML tags, by associating a detection algorithm with it. Tracking.js is available for Chrome, Firefox, IE, Opera and Safari.


Proposed Model
Our model is an amalgamation of the above experiments and it unfolds in a cascaded manner. The EigenEmotions are trained in Python using the new dataset that was created. The top 150 vectors are chosen and a preliminary classification is produced in Python. The PCA components, mean image and the fitted training set are exported and placed inside JavaScript models. A linear SVM is then trained in Python using the fitted training set; the test set is classified with it and results are produced. The trained linear SVM model and the PCA model are exported in a pickle format from Python.
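A hedged sketch of this Python training and export step is shown below, using scikit-learn's PCA and LinearSVC and the standard pickle module. The file names, label encoding and the whiten option are assumptions rather than the exact code we used.

```python
import pickle
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# X: flattened 50x37 grayscale faces, y: 0 = angry, 1 = happy (assumed files)
X = np.load("face_expressions_train_X.npy")
y = np.load("face_expressions_train_y.npy")

pca = PCA(n_components=150, whiten=True).fit(X)   # the top-150 EigenEmotions
X_pca = pca.transform(X)                          # fitted training set

svm = LinearSVC().fit(X_pca, y)                   # linear SVM on the PCA features

# Export the models for the Flask comparison server; the PCA mean and
# components are separately dumped for the JavaScript side (not shown here).
with open("pca_model.pkl", "wb") as f:
    pickle.dump(pca, f)
with open("svm_model.pkl", "wb") as f:
    pickle.dump(svm, f)
```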

There is a separate SVM training phase in JavaScript. The SVM is a modification of svm.js [23]. The fitted training set exported from Python is loaded and a linear SVM is trained with a maximum of 100,000 iterations. The trained JavaScript SVM model is exported into another JavaScript object.

At run time the JavaScript uses tracking.js, which is invoked on a live video element. The face classifier in tracking.js is used to find the face region; it uses the Viola-Jones classification system and produces a boxed face region detection. This region is then reshaped to the size that was used for classification in Python, a slice of 50 × 37 pixels. The PCA components and mean image exported from Python are then loaded. The image data is extracted from the resized video frame and the mean image is subtracted from it. The resulting image is normalized and transformed with the PCA components. This is then passed to the pre-trained JavaScript SVM model and a result is obtained, which is mapped to a happy or an angry emoticon according to the label emitted.
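For clarity, the per-frame arithmetic the JavaScript performs can be written as the NumPy sketch below: subtract the mean image, project onto the PCA components, and score with the linear SVM. The variable names, the normalization step and the sign convention of the decision value are assumptions; in FEBEI this runs in JavaScript with the exported model values.

```python
import numpy as np

def classify_face(face_50x37, mean_image, components, svm_w, svm_b):
    """face_50x37: flattened grayscale face (1850,); components: (150, 1850)."""
    centered = face_50x37.astype(np.float64) - mean_image
    centered /= 255.0                       # simple normalization (assumed)
    weights = components @ centered         # projection onto the EigenEmotions
    score = svm_w @ weights + svm_b         # linear SVM decision value
    return "happy" if score > 0 else "angry"
```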

In order to compare this result with the one in Python, a Flask application server [24] is created in Python. The pickled SVM model and PCA model are unpickled, loaded and made ready to classify incoming images. The JavaScript performs an AJAX POST request to the local Flask server with the pixel data of the resized, grayscale image that is obtained from the live feed. The Python server classifies this image based on the stored models and returns a class label. The JavaScript uses that class label to display either a happy or an angry emoticon.
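A minimal sketch of such a Flask endpoint is given below; the route name, JSON field and model file names are assumptions, not necessarily the ones used in FEBEI.

```python
import pickle
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("pca_model.pkl", "rb") as f:
    pca = pickle.load(f)
with open("svm_model.pkl", "rb") as f:
    svm = pickle.load(f)

@app.route("/classify", methods=["POST"])
def classify():
    # The browser POSTs the 50x37 grayscale pixel values of the detected face
    pixels = np.array(request.get_json()["pixels"], dtype=np.float64).reshape(1, -1)
    label = int(svm.predict(pca.transform(pixels))[0])
    return jsonify({"label": label})  # 0 = angry, 1 = happy (assumed encoding)

if __name__ == "__main__":
    app.run(port=5000)
```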

In a commercial deployment the Python evaluation would be hidden and only the JavaScript evaluation would be exposed; the Python server evaluation is purely for educational purposes. The JavaScript classification yielded much better results and is the main focus of this project.

Results & Evaluation
We have a robust static web application with JavaScript which can classify emotions. Currently FEBEI can classify two emotions, angry and happy. It also maps these emotions to emoticons and displays them on screen. The labels can also be extracted from the JavaScript and used for other purposes.

As expected, the JavaScript implementation of our emotion classification performed more quickly than the Python implementation hosted on the Flask-based web app.

Figure 1: Accurate classification of an angry face

Figure 2: Accurate classification of a happy face

FEBEI was able to classify images at a rate of 11 frames per second, which is more than fast enough for our purposes. When the Python plugin was used as a classifier, this rate dropped to an average of 7 frames per second.

FEBEI consistently performed better than random guessing when evaluated on our test set. Our JavaScript expression classifier got an average of 70% of the emotion classifications correct, with slightly better performance in recognizing angry faces than happy faces. It is important to note that the quantitative performance evaluation was conducted using static images. Getting a quantitative measure of accuracy in real time was difficult but, qualitatively, FEBEI worked surprisingly well. Figures 1 and 2 showcase accurate classifications by both FEBEI and our Python implementation. Qualitative evidence suggests that the JavaScript implementation outperforms the Python version: even in slightly ambiguous situations where the Python model failed, FEBEI was successful (see Figure 3 for an example).


Figure 3: Accurate classification by FEBEI in contrast to an inaccurate classification by our Python implementation

Figure 4: The eigenfaces generated from the training set for angry and happy faces

The general Python code was evaluated with one third of the training set held out, and the results produced were as below. The Python SVM produced a precision of 1.0, correctly classifying all images. Figure 4 shows the EigenEmotions generated from the training images. Figure 5 shows the classification of the test set. Figure 6 shows the confusion matrix that was produced in Python.
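A hedged sketch of this evaluation, holding out one third of the data and printing a classification report and confusion matrix with scikit-learn, is shown below; the file names, label encoding and random seed are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

X = np.load("face_expressions_X.npy")   # flattened 50x37 grayscale faces
y = np.load("face_expressions_y.npy")   # 0 = angry, 1 = happy (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

pca = PCA(n_components=150, whiten=True).fit(X_tr)
svm = LinearSVC().fit(pca.transform(X_tr), y_tr)

y_pred = svm.predict(pca.transform(X_te))
print(classification_report(y_te, y_pred, target_names=["angry", "happy"]))
print(confusion_matrix(y_te, y_pred))
```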

In order to compare this method with another algorithm, we went on to classify the data using the OverFeat CNN [22]. OverFeat yielded an 85% accuracy compared to the 100% of the EigenEmotions. Figure 7 shows the classification matrix from OverFeat.

Figure 5: The image classifications for the test set

Figure 6: The confusion matrix produced in Python for the test set

Conclusion and Improvements
We have effectively created a system which can map facial expressions to emoticons using only JavaScript in a browser, or Python hosted on a web server. We have extended the tracking.js framework with a custom library which can detect emotions and map them to the appropriate emoticon. The library provides easy access to change the classifier parameters and can be retrained to perform even a different task. The effectiveness of this system was evaluated against a few other techniques, and it was demonstrated that even though some of them may allow better classification, this was the best performance that could be achieved given the time constraints.

There are several ways to improve this system. For a start, we can include many more emotions and emoticons. The SVM can be tuned with more appropriate parameters to provide better results. The training data set can be made more diverse, and training can be done on a color image data set to identify more significant vectors. The JavaScript can be made more elaborate, with additions that allow custom methods to be plugged in on detection of an expression.

We feel that robust JavaScript frameworks are the future of commercial software and that this system will pave the way for a revolution in the utilization of JavaScript for computer vision. We will continue to work on this system and make improvements to it, and we are eager to invite more enthusiastic developers to work on this system and make it even more effective.


Figure 7: The classification matrix from the OverFeat CNN for the test set

For this purpose we are making the code and data set open source. Our deliverable can be found at the following URL.

https://febei.neocities.org/face_exp.html

References
1. G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, Classifying facial actions, IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 10, pp. 974-989, Oct. 1999.
2. J. F. Cohn, Z. Ambadar and P. Ekman, Observer-Based Measurement of Facial Expression with the Facial Action Coding System, in J. A. Coan and J. B. Allen (Eds.), The handbook of emotion elicitation and assessment, Oxford University Press Series in Affective Science, Oxford, 2005.
3. M. Pantic and I. Patras, Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments From Face Profile Image Sequences, IEEE Trans. Systems, Man, and Cybernetics Part B, vol. 36, no. 2, pp. 433-449, 2006.
4. J. Harrigan, R. Rosenthal, K. Scherer, New Handbook of Methods in Nonverbal Behavior Research, Oxford University Press, 2008.
5. I. S. Pandzic and R. Forchheimer (editors), MPEG-4 Facial Animation: The Standard, Implementation and Applications, John Wiley and Sons, 2002.
6. L. Malatesta, A. Raouzaiou, K. Karpouzis and S. Kollias, Towards Modeling Embodied Conversational Agent Character Profiles Using Appraisal Theory Predictions in Expression Synthesis, Applied Intelligence, Springer Netherlands, July 2007.
7. A. Raouzaiou, N. Tsapatsoulis, K. Karpouzis, and S. Kollias, Parameterized Facial Expression Synthesis Based on MPEG-4, EURASIP Journal on Applied Signal Processing, vol. 2002, no. 10, pp. 1021-1038, 2002.
8. Bettadapura, Vinay. "Face expression recognition and analysis: the state of the art." arXiv preprint arXiv:1203.6722 (2012).
9. P. Viola and M. J. Jones, Robust real-time object detection, Int. Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, Dec. 2004.
10. Y. Wang, H. Ai, B. Wu, and C. Huang. Real time facial expression recognition with AdaBoost. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004.
11. Whitehill, Jacob, and Christian W. Omlin. "Haar features for FACS AU recognition." Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on. IEEE, 2006.
12. Happy, S. L., and Aurobinda Routray. "Automatic facial expression recognition using features of salient facial patches." Affective Computing, IEEE Transactions on 6.1 (2015): 1-12.
13. P. Ekman and W. V. Friesen, Constants across cultures in the face and emotions, J. Personality Social Psychology, vol. 17, no. 2, pp. 124-129, 1971.
14. Hussain, Sakib. Emotion detection from frontal facial image. Diss. BRAC University, 2013.
15. Bailenson, Jeremy N., et al. "Real-time classification of evoked emotions using facial feature tracking and physiological responses." International Journal of Human-Computer Studies 66.5 (2008): 303-317.
16. Dinculescu, Adrian, et al. "Novel approach to face expression analysis in determining emotional valence and intensity with benefit for human space flight studies." E-Health and Bioengineering Conference (EHB), 2015. IEEE, 2015.
17. Thuseethan, S., and S. Kuhanesan. "Eigenface Based Recognition of Emotion Variant Faces." Available at SSRN 2752808 (2016).
18. Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. In Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on (pp. 46-53). IEEE.
19. https://trackingjs.com/
20. Whitehill, J., & Omlin, C. W. (2006, April). Haar features for FACS AU recognition. In Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on (pp. 5-pp). IEEE.
21. Thuseethan, S., & Kuhanesan, S. (2016). Eigenface Based Recognition of Emotion Variant Faces. Available at SSRN 2752808.
22. https://github.com/sermanet/OverFeat
23. https://github.com/karpathy/svmjs
24. http://flask.pocoo.org/