
Final Project Write-Up for CSCI 534: Affective Computing

“Developing Group Emotion-Recognition Software to Investigate Group vs.

Individual Emotion”

- ChengYuan Cheng, Drishti Saxena, Prakhar Deep, Ritesh Sinha

University of Southern California

5th Dec, 2018


TABLE OF CONTENTS

I. Introduction
II. Related Research and Approaches
III. Theoretical Perspectives
IV. Developing the Software
V. Empirical Test: The Experiment
VI. Interesting Insights
VII. Next Steps, Implications & Conclusions
VIII. Division of Labor
IX. References
X. Appendices


I. Introduction

Area of Interest: Emotion recognition is currently one of the most researched areas in artificial intelligence. We aimed to build software to detect a person's emotion from facial expressions and then to use it to explore the question of how a group affects an individual's emotion. Our general area of interest was emotion recognition, and we used the results to test our hypothesis that being in a group elevates an individual's emotional response.

Problem Statement: To develop emotion-recognition software for a comparative study of a person's emotions when viewing a stimulus individually versus in a group. "Individuals vs. Groups": as the saying goes, misery loves company, so we tested the hypothesis that a person's emotion is heightened when exposed to a stimulus in a group compared to in solitude. We built emotion-recognition software to detect group and individual responses from recorded video. We then showed videos of similar intensity to both individuals and groups and measured the intensity of their emotions.

II. Related Research and Approaches

● In 1991, Alan J. Fridlund conducted a similar experiment with 64 people to examine how emotions vary when a person is alone versus in a group. He explored four conditions: the participant was (a) alone, (b) alone but believing that a friend nearby was otherwise engaged, (c) alone but believing that a friend was viewing the same videotape in another room, or (d) with a friend present. He used electrodes to measure facial muscle movement to detect emotion (Fridlund, 1991).

● In "The Ripple Effect: Emotional Contagion and Its Influence on Group Behavior" (2002), Sigal G. Barsade of the University of Pennsylvania examined how one person can influence a group toward a targeted emotion. The experiment involved 98 MBA students divided into groups of four, with a confederate placed in each group to steer the discussion toward the targeted emotion. The analysis was based on self-reports, questionnaires filled out before and after the experiment, and observation of the recorded video (Barsade, 2002).

● In "Individual and group-level factors for students' emotion management in online collaborative group work" (2017), Jianxia Du, Chuang Wang, Mingming Zhou, Jianzhong Xu, Xitao Fan, and Saosan Lei examined group trust and the dynamics of people in online collaborative group work. Four hundred eleven university students from 103 groups in the United States responded to survey items on online collaboration, interactivity, communication media, and group trust; the authors used group chat logs and survey forms to build their results. The results revealed that trust among group members had a positive and powerful influence on online collaboration (Du et al., 2017).


III. Theoretical Perspectives

Our endeavor to investigate the difference in emotional experiences when responding to similar stimuli individually and in groups is not new. Historically, the view that the expression of emotion is a primarily personal, individual phenomenon has been contested by supporters of the Behavioral Ecology view of expressions, such as Alan Fridlund, and this contesting view is supported by a considerable amount of research. In a study by Fridlund (1991), participants made to watch pleasant/funny videos smiled systematically more as "sociality", the extent to which they believed they had company, increased, as measured by facial electromyography. However, sociality had no such effect on self-reported emotion. This finding corroborated Fridlund's view that facial expressions are only weakly connected to the true emotion felt: they are not only shaped by the social context but serve primarily as social information intended to be communicated, implicitly or explicitly, to another person.

As discussed in Professor Gratch's lecture to the Affective Computing class on October 17, 2018, there is thus an evident divide between this view and Paul Ekman's Basic Emotion Theory, which strongly contends that although emotions may be masked by purposeful means, true emotion is reflected in one's facial expressions and micro-expressions. Ekman's theory draws support from empirical evidence showing the universality of emotion across cultures (e.g., Ekman, 1972) and showing that voluntarily making an expression can lead to the subjective experience of an emotion (Ekman, Levenson & Friesen, 1983).

An alternative "Emotional Contagion" view states that the emotional experience of one individual can be mutually shared and distributed across the members of a group, which may lead the group to amplify the intensity of emotion felt relative to the experience each member would have had alone (Barsade, 2002). Yet another perspective comes from a fairly recent study by Shteynberg et al. (2014), which found that mere group attention, as opposed to emotional contagion, may intensify the emotional experience elicited by a specific stimulus: group attention augmented the intensity of fear and happiness felt when watching scary and positive advertisements, respectively. This suggests that the mere collective experience of attending to a stimulus elicits intensified affect in groups.

Such contrasting views on the affective experience of stimuli in groups led us to investigate further by employing an automated recognition approach. We hoped to gain deeper insight by using advanced recognition techniques to uncover more sophisticated answers to the questions that these theoretical perspectives pose. Next, we explain how the software used in the experiment was developed.

IV. Developing the Software

● Emotion Recognition: Microsoft provides the Emotion API, which takes a video or image as input and returns confidence scores across a set of emotions for the faces detected in the frame. It can detect eight different emotion categories, following Ekman's basic emotion model: happiness, sadness, surprise, anger, fear, contempt, disgust, and neutral.


● Subscriptions: We needed to obtain API keys for the Vision APIs in order to integrate our code with the Emotion API. For video frame analysis, the applicable APIs are:
  • Computer Vision API
  • Emotion API
  • Face API

● Interpreting Results: The result returned by the Emotion API is a JSON structure. The detected emotion should be interpreted as the emotion with the highest score, since the scores are normalized to sum to one.
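To make this flow concrete, here is a minimal sketch in Python of submitting one frame and reading the highest-scoring emotion. The endpoint region, subscription key, and file name are placeholders rather than values from our project (and the standalone Emotion API has since been retired by Microsoft):

```python
# Minimal sketch of one Emotion API request and its interpretation.
# Endpoint region, key, and image path are placeholders.
import requests

EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
SUBSCRIPTION_KEY = "<your-emotion-api-key>"  # obtained via the subscription

def recognize_emotions(image_path):
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",  # raw image bytes
    }
    with open(image_path, "rb") as f:
        response = requests.post(EMOTION_URL, headers=headers, data=f.read())
    response.raise_for_status()
    return response.json()  # JSON list: one entry per detected face

faces = recognize_emotions("frame_001.jpg")
for face in faces:
    scores = face["scores"]  # e.g. {"happiness": 0.93, "fear": 0.01, ...}
    top_emotion = max(scores, key=scores.get)  # scores sum to one
    print(face["faceRectangle"], top_emotion, scores[top_emotion])
```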

● Analyzing Videos: We performed analysis on frames taken from a video stream. The basic components of the system are:


  • Acquire frames from a video source
  • Select which frames to analyze
  • Submit these frames to the API
  • Consume each analysis result that is returned from the API call

● A Producer-Consumer Design: We used a "producer-consumer" design, as suggested by the Microsoft Emotion documentation. A producer thread puts tasks (pending analysis results) into a queue to keep track of them.

A consumer thread takes tasks off the queue and waits for them to finish; it can either display the result or raise the exception that was thrown. Using a queue ensures that results are consumed one at a time and in the correct order, without limiting the maximum frame rate of the system.
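A minimal sketch of this design follows, with a hypothetical analyze_frame() standing in for the per-frame Emotion API call from the earlier sketch (stubbed here so the pipeline runs standalone):

```python
# Minimal producer-consumer sketch for frame analysis. analyze_frame() is a
# hypothetical stand-in for the Emotion API call; it is stubbed so the
# pipeline can run without an API key.
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

import cv2  # OpenCV, used here only to acquire frames


def analyze_frame(frame):
    # Placeholder for the per-frame Emotion API request (see earlier sketch).
    return frame.shape  # pretend "analysis result"


task_queue = queue.Queue(maxsize=8)  # bounds the number of in-flight analyses
executor = ThreadPoolExecutor(max_workers=4)


def producer(video_source=0, every_nth=30):
    """Acquire frames, select every n-th one, and submit it for analysis."""
    capture = cv2.VideoCapture(video_source)
    frame_idx = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % every_nth == 0:
            # Enqueue the pending result so frame order is preserved.
            task_queue.put(executor.submit(analyze_frame, frame))
        frame_idx += 1
    capture.release()
    task_queue.put(None)  # sentinel: no more tasks


def consumer():
    """Consume results one at a time, in frame order."""
    while True:
        task = task_queue.get()
        if task is None:
            break
        try:
            print(task.result())  # blocks until this frame's analysis is done
        except Exception as exc:
            print("analysis failed:", exc)  # surface the re-raised exception


threading.Thread(target=producer, daemon=True).start()
consumer()
```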


V. Empirical Test: The Experiment

Having developed the software, we then designed our experiment to gain further insights into our research question: how can our understanding of affective consequences improve by using emotion-recognition software to study the emotional expression of individuals alone and in groups as they attend to emotionally evocative stimuli? Based on the discrepant research findings, we decided to conduct an exploratory study through a structured experiment with the following two-tailed (bidirectional) hypotheses:

● Hypothesis H1: The fear-output scores from software analysis would be significantly different when participants watch the scary video in a group of peers than when they watch it alone.

● Hypothesis H2: The happiness-output scores from software analysis would be significantly different when participants watch the funny video in a group of peers than when they watch it alone.

Independent Variable: The presence of peers in the room, manipulated by leaving participants alone in the room (individual condition) or with three other peers (group condition).

Dependent Variable: The fear-emotion output from the emotion-recognition software was used for the scary-video analyses, and the happiness-emotion output was used for the funny-video analyses.

Method

● Participants: Participants were recruited through convenience sampling; our friends and classmates formed an easily available pool of participants who were invited to take part in our study. They were all English-speaking USC graduate students (N = 11; 10 females and 1 male) between the ages of 20 and 25. We initially had 12 participants (three groups of four in the group condition); however, one participant's hand kept covering his face in all of his videos, so the software could not recognize his face or expressions, and his data was not used.

● Materials:
  ○ Informed consent form
  ○ Laptop with an inbuilt camera for video-recording and for showing participants the selected videos
  ○ Self-report questionnaire and pens
  ○ Phone camera / video recorder to record participants in the group condition

● Source Videos: Two videos for each emotion stimulus were chosen (four videos in total) to be used in the experiment. The videos ranged from 90 to 180 seconds. To avoid bias, we chose videos with similar content that had been used in past studies. For the fear stimulus, we used a clip from The Shining featured in the film-elicitation study by Gross and Levenson (1995) and a clip from Pihu with similar content. For the happiness stimulus, we used two selected clips from 2 Broke Girls to best recreate the natural scenario of watching a televised comedy series.

● Experimental Design & Procedure: We used two separate repeated-measures experimental designs for our experiment: one for the scary videos and the fear-output values, and a second for the funny videos and the happiness-output values.

The experiment was conducted in noise-proof rooms that had been booked in advance. Participants were invited in and given the informed consent form (see Appendix A) to apprise them of their rights and of the nature and duration of the experiment. Their consent to be video-recorded was also obtained. They were compensated with free cookies.

Each participant was shown four videos in total: two fear-inducing and two laughter-inducing (pre-rated), as described above. Each participant watched one fear video and one laughter video in the group condition, and the other fear and laughter videos in the individual condition, so ultimately all participants watched all four videos. In the individual condition, the video was played on the laptop, the experimenter left the room so the participant was alone, and the participant was recorded using the laptop's built-in camera. In the group condition, the experimenter video-recorded the group watching the video on a separate phone camera. The order of conditions and videos was counterbalanced, as sketched below.
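Purely as an illustration (the exact assignment scheme is not recorded in this write-up), counterbalancing of this kind can be implemented by cycling through all combinations of condition order and video assignment:

```python
# Illustrative counterbalancing sketch; the actual scheme used in the study
# is an assumption. Each participant gets a condition order and a pairing
# deciding which fear/laughter videos appear in the group condition.
from itertools import cycle, product

condition_orders = ["group first", "individual first"]
group_video_sets = [("fear video A", "funny video A"),
                    ("fear video B", "funny video B")]

assignments = cycle(product(condition_orders, group_video_sets))
for participant_id, (order, group_videos) in zip(range(1, 13), assignments):
    print(f"P{participant_id:02d}: {order}; group condition shows {group_videos}")
```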

After completing these tasks, the participants were asked to fill out the self-report questionnaire (see Appendix B) and were then thanked for their participation. The fear and happiness outputs were used independently as dependent measures for evaluating the difference between the group and individual conditions in two separate repeated-measures one-way ANOVAs.

Results

● Descriptive results: The output of the Microsoft Emotion API contains a numerical intensity evaluation for Ekman's six basic emotion categories (disgust, sadness, happiness, fear, anger, and surprise) along with contempt and a neutral category. However, we were not interested in all eight of them. In fact, the "neutral" score is simply a mirror of the remaining emotions, so including it might skew the results enough to make the analysis difficult. In line with our experimental hypotheses, our target features are the intensities of the "fear" and "happiness" values. After filtering out every other emotion and scaling the graphs so they align with each other, we obtained the descriptive results.
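A sketch of this filtering and scaling step follows, assuming for illustration that the per-frame scores were exported to CSV files with one column per emotion (file names and layout are assumptions, not artifacts from our project):

```python
# Keep only the target emotion's trace and min-max scale it to [0, 1] so the
# group and individual graphs align. The CSV layout is assumed.
import pandas as pd


def target_trace(csv_path, emotion):
    df = pd.read_csv(csv_path)  # columns such as: fear, happiness, neutral, ...
    trace = df[emotion]         # filter out every other emotion
    span = trace.max() - trace.min()
    return (trace - trace.min()) / span if span > 0 else trace * 0.0


group_fear = target_trace("group_scary.csv", "fear")
individual_fear = target_trace("individual_scary.csv", "fear")
print("variance - group:", group_fear.var(), "individual:", individual_fear.var())
```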

Looking directly at the results, we can tell that the group results, for both happiness and fear, seem to have higher variance. We observe more frequent rises and drops in intensity that the individual results do not show. Comparing the min/max values and the means side by side, we observe the following pattern.


● Inferential Results

There were many possible ways to analyze the data that the software generated. We conducted several exploratory tests to see which would best answer our research question and test our hypotheses. The primary concern was that the software, built on Microsoft Cognitive Services, generated output for a range of emotions, including surprise, sadness, happiness, disgust, and neutral. Thus, to assess participants' emotional expressivity when watching the scary videos and funny videos, different data outputs had to be the focus of each analysis. As a result, two separate one-way repeated-measures ANOVAs were conducted using the group/individual condition as the within-subjects factor.

We used IBM SPSS Analytics to perform the statistical analyses. After conducting a series of statistical tests with different outputs and covariates, we found that the most straightforward way of reporting and understanding the results was to run one repeated-measures ANOVA for the fear output in the group and individual conditions when participants watched the scary videos, and another repeated-measures ANOVA for the happiness output in the group and individual conditions when participants watched the funny videos.

To compute an aggregate score for each participant, each video recording was clipped so that the most intense 30 seconds were passed through the emotion-recognition software, which output emotion scores for each of the 30 frames (one frame per second). To analyze expressivity when watching the scary video, the average fear output over these 30 frames was computed and used as the participant's aggregate "fear-output" score. Similarly, the average happiness output over the 30 frames was used as the participant's "happiness-output" score.
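For illustration, the aggregation and the inferential comparison can be reproduced outside SPSS. The sketch below swaps in scipy and a paired t-test, which is equivalent to a repeated-measures ANOVA when there are only two within-subject conditions; the file names and per-participant CSV layout are assumptions:

```python
# Average the fear score over the 30 most intense one-second frames per
# participant, then compare group vs. individual with a paired t-test.
import numpy as np
from scipy import stats


def fear_output(csv_path):
    # 30 rows = one fear score per one-second frame; column 0 holds "fear".
    scores = np.loadtxt(csv_path, delimiter=",", skiprows=1, usecols=0)
    return scores.mean()  # the participant's aggregate fear-output score


participants = range(1, 12)  # N = 11
group = [fear_output(f"p{i:02d}_group_scary.csv") for i in participants]
individual = [fear_output(f"p{i:02d}_individual_scary.csv") for i in participants]

t_stat, p_value = stats.ttest_rel(group, individual)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # F = t**2 with two conditions
```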

Table A: SPSS data view for statistical analyses of software-generated data

● Analysis of Scary Video Expressivity for Group vs. Individual Condition

As evidenced by the diagrams below (Image A), there was no statistically significant difference found between the fear-output scores [F (11) = 0.199, p > 0.05] in the group condition and in the individual condition when participants watched the scary videos. However, a difference in means is evident such that the fear-emotion in the group condition is elevated in comparison to the fear-emotion in the individual condition.


Image A: Fear-Output Results from SPSS

Image B: Happiness-Output Results from SPSS analysis


● Analysis of Funny Video Expressivity for Group vs. Individual Condition

As evidenced by the diagrams above (Image B), there was no statistically significant difference found between the happiness-output scores [F (11) = 0.181, p > 0.05] in the group condition and in the individual condition when participants watched the funny videos. However, a difference in means is evident such that the happiness-emotion in the group condition is slightly more intensified in comparison to the happiness-emotion in the individual condition.

VI. Interesting Insights

Here are a few findings from our results that might spark ideas both for improving our hypotheses and for designing a future experiment.

High intensity of "neutral" in the fear results: As previously mentioned, the "neutral" score is a mirror of the remaining emotions. There are limitations to a facial-expression program, especially when dealing with fearful faces.

As the graph shows, although our target value was fear, the dominating emotion was always neutral. We believe this is because fear itself is not a very "expressive" emotion, at least in facial expression alone. This is further supported by the fact that we see a much higher fear level in the self-reports than in the program output.

If we were to perform our experiment with additional devices, such as a heart-rate sensor, we might get a better picture of how individuals express fear.


Happiness in the group fear scenario: Another interesting observation is the sudden rise in "Happiness" only in the Group Fear case.

Although happiness was not our target variable, it is very interesting to see happiness occur in a situation designed to stimulate fear. After analyzing the video, we discovered that this is due to "relief" after the tension of fear has built up. This is not observed in the individual scenario, as people are unlikely to show this kind of relief when alone. What is causing this behavior? Perhaps people influence each other to be more positive in this kind of scenario. We find this to be a possible follow-up to our experiment.

VII. Next Steps, Implications & Conclusions

● As evidenced by the inferential results, neither of our hypotheses was supported, due to the lack of a statistically significant difference in the measured variables between the group and individual conditions for both scary and funny video-watching.

● Although this was a small study whose findings may be attributable to a number of factors, including the lack of a representative sample (in terms of size, gender, and culture), our findings still yield interesting and valuable insights into the use of automated emotion recognition to investigate group emotion.

● In light of the theoretical perspectives discussed earlier, it seems as though implicit sociality as described by Fridlund (1991) may be at play in such a scenario, as during the experiment participants were in a room of a library with their friends waiting outside the room. It is possible that they implicitly perceived the presence of their peers, which in turn may have influenced their expressivity.

● Moreover, it could also be that group attention explains these results, since participants' mere awareness of having their "group-mates" or peers somewhere around may have slightly intensified the emotion, as seen in the observable difference in means between the two conditions.

● Future research could include even more sophisticated and holistic measurement of affective cues to study group emotion such as physiological measurement and can also look at the interesting difference between expression and emotional feeling.

● The self-report data from the experimental study supports the data generated by the software.

● The software captures intense or more expressive emotions, such as happiness, very accurately.

● The Emotion API detects a wide range of emotions: Ekman's six basic emotions plus contempt and neutral.


● The experimental results and the self-reports validate each other.
● Sometimes, during intense moments, participants cover their faces; the software is then unable to recognize such moments due to the face coverage.
● Video recorded from different angles yields different emotion values.
● In this experiment, we considered only facial expressions to judge a person's emotion. However, some people are more expressive than others, and facial expressions sometimes differ from the emotion actually felt. For example, after a very scary moment, some participants start smiling, leading to a high value for the happiness emotion. This could be improved by using devices that measure physiological signals.

● Several biases are involved in the experiment and would need to be eliminated for more accurate results:

○ Environment Bias: Participants respond differently in a group of friends than in a group of strangers.

○ Personal Bias: A participant's mindset right before the experiment also affects the results. Someone having a hectic day will not respond to the experiment without bias.

○ Video Intensity Bias: The intensity of the video shown in the group/individual condition matters too, as does how familiar the participant is with the video.

In conclusion, we believe that building the software and employing it in an experiment on group emotion proved to be a valuable learning experience, not only in building affective software but also in understanding its purpose and how it can contribute to sophisticated research, uncovering the deeper layers that surround the complex concept and experience of group emotion.

VIII. Division of Labor

● Ritesh Sinha – Software Development, Analysis of Videos for emotion recognition, Conducting Experiment, Experiment method, Theoretical Interpretation of Results

● Prakhar Deep – Software Development, Analysis of Videos for emotion recognition, Conducting Experiment, Experiment method, Theoretical Interpretation of Results

● Drishti Saxena – Experiment Design, Statistical Analysis of Output results, Conducting Experiment, Experiment method, Theoretical Interpretation of Results

● Cheng Yuan – Experiment Design, Descriptive Analysis of Output results, Conducting Experiment, Theoretical Interpretation of Results


IX. References

Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47(4), 644. doi:10.2307/3094912

Du, J., Wang, C., Zhou, M., Xu, J., Fan, X., & Lei, S. (2017). Group trust, communication media, and interactivity: Toward an integrated model of online collaborative learning. Interactive Learning Environments, 26(2), 273-286. doi:10.1080/10494820.2017.1320565

Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384.

Ekman, P., Friesen, W. V., O'Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., ... & Scherer, K. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712.

Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221(4616), 1208-1210.

Fridlund, A. J. (1991). Sociality of solitary smiling: Potentiation by an implicit audience. Journal of Personality and Social Psychology, 60(2), 229-240. doi:10.1037/0022-3514.60.2.229

Gratch, J. (2018, October 17). Social emotions. Lecture presented at CSCI 534: Affective Computing, University of Southern California, Los Angeles.

Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition & Emotion, 9(1), 87-108.

Shteynberg, G., Hirsh, J. B., Apfelbaum, E. P., Larsen, J. T., Galinsky, A. D., & Roese, N. J. (2014). Feeling more together: Group attention intensifies emotion. Emotion, 14(6), 1102.


X. Appendices

Appendix A: Informed Consent
-----------------------------------------------------------------------------------------------------------------
Hello! Thank you for taking the time to be a part of our study. We are investigating the efficacy of emotion-recognition software that our team developed and are interested in gauging the emotional content of specific videos for our Affective Computing (CSCI 534) class at the University of Southern California (USC).

NATURE OF STUDY & PROCEDURE This study should take approximately 10 - 15 minutes to complete. You will be asked to watch two short videos in the study. During the period that you watch the video, you will be recorded for the purpose of evaluation of your reactions to the video. Please let us know if you do not wish to be recorded. After watching the videos, you will be asked to fill in a short questionnaire, and you are good to go!

RISKS & BENEFITS There are no direct risks or benefits involved in participating in the study. The videos may elicit some momentary emotional discomfort. You will, however, be contributing to emotion-recognition research that may yield potential benefits to society, and thereby an increase in your self-esteem.

COMPENSATION You will receive free pizza as compensation for participating in the study.

PARTICIPANT RIGHTS & CONFIDENTIALITY Participating in this study is completely voluntary. You have the right to withdraw participation before, during and even after the task of the study is completed. Withdrawing participation will not affect your relationship with the university or researchers, or any compensation you were promised. Your responses are completely anonymous and your video recordings will not be used or distributed outside the study.

QUESTIONS & CONCERNS If you have any questions, concerns or wish to know the results of the study, please feel free to contact our researcher, Drishti Saxena at [email protected] or (650) 660-9772. If you have any questions, complaints or concerns regarding your rights or your participation in the study, you may contact the Institutional Review Board (IRB) of USC at [email protected] or (323) 442-0114.

I have read and understood the nature of the study as well as my rights of participation. I consent to voluntary participation in the study (which entails being camera-recorded).

Name: _____________ Signature: _____________ Date: __ /__ /____


-----------------------------------------------------------------------------------------------------------------
Appendix B: Experiment Self-Report Questionnaire
------------------------------------------------------------------------------------
Tick the chosen circle for Questions 1 - 4! Please answer Questions 1 & 2 with regard to the 1st video you watched:

Q1 I thought the content of video 1 was meant to induce

           Not at All (1)  A Little Bit (2)  Moderate (3)  Quite a Bit (4)  A Lot (5)
Fear             o               o               o               o             o
Laughter         o               o               o               o             o
Sadness          o               o               o               o             o
Surprise         o               o               o               o             o
Anger            o               o               o               o             o

Q2 I felt ___________ during the time I watched video 1

           Not at All (1)  A Little Bit (2)  Moderate (3)  Quite a Bit (4)  A Lot (5)
Scared           o               o               o               o             o
Happy            o               o               o               o             o
Sad              o               o               o               o             o
Surprised        o               o               o               o             o
Angry            o               o               o               o             o

Please answer Questions 3 & 4 with regard to the 2nd video you watched: (Turn Over Page)

Q3 I thought the content of video 2 was meant to induce

           Not at All (1)  A Little Bit (2)  Moderate (3)  Quite a Bit (4)  A Lot (5)
Fear             o               o               o               o             o
Laughter         o               o               o               o             o
Sadness          o               o               o               o             o
Surprise         o               o               o               o             o
Anger            o               o               o               o             o

Q4 I felt ___________ during the time I watched video 2

           Not at All (1)  A Little Bit (2)  Moderate (3)  Quite a Bit (4)  A Lot (5)
Scared           o               o               o               o             o
Happy            o               o               o               o             o
Sad              o               o               o               o             o
Surprised        o               o               o               o             o
Angry            o               o               o               o             o

Q5 What do you think the aim of the experiment was?

________________________________________________________________ ________________________________________________________________

------------------------------------------------------------------------------------
**Note: The participants filled out the above questionnaire twice - once for the group condition and once for the individual condition.