Facial emotion recognition and visual search strategies of children with high functioning autism and Asperger syndrome

Research in Autism Spectrum Disorders 7 (2013) 833–844


Journal homepage: http://ees.elsevier.com/RASD/default.asp


Denise Leung a, Anna Ordqvist b, Torbjorn Falkmer a,b,c,d,*, Richard Parsons a, Marita Falkmer e

a School of Occupational Therapy & Social Work, Curtin Health Innovation Research Institute, Curtin University, Perth, WA, Australia
b Rehabilitation Medicine, Department of Medicine and Health Sciences (IMH), Faculty of Health Sciences, Linkoping University & Pain and Rehabilitation Centre, UHL, County Council, Linkoping, Sweden
c School of Health Sciences, Jonkoping University, Sweden
d School of Occupational Therapy, La Trobe University, Melbourne, Vic., Australia
e School of Education and Communication, CHILD Programme, Institute of Disability Research, Jonkoping University, Sweden

A R T I C L E I N F O

Article history:

Received 20 November 2012

Received in revised form 25 March 2013

Accepted 26 March 2013

Keywords:

Emotion recognition

Eye tracking

Fixations

Fixation durations

Socialisation

A B S T R A C T

Adults with high functioning autism (HFA) and Asperger syndrome (AS) are often less able to identify facially expressed emotions than their matched controls. However, results regarding emotion recognition abilities in children with HFA/AS remain equivocal. Emotion recognition ability and visual search strategies of 26 children with HFA/AS and matched controls were compared. An eye tracker measured the number of fixations and fixation durations as participants were shown 12 pairs of slides, displaying photos of faces expressing anger, happiness or surprise. The first slide of each pair showed a face broken up into puzzle pieces. The eyes in half of the puzzle piece slides were bisected, while those in the remaining half were whole. Participants then identified which of three alternative faces was expressing the same emotion shown in the preceding puzzle piece slide. No differences between the participant groups were found for either emotion recognition ability or number of fixations. Both groups fixated more often on the eyes and performed better when the eyes were whole, suggesting that both children with HFA/AS and controls consider the eyes to be the most important source of information during emotion recognition. Fixation durations were longer in the group with HFA/AS, which indicates that while children with HFA/AS may be able to accurately recognise emotions, they find the task more demanding.

© 2013 Elsevier Ltd. All rights reserved.

A major role for the human cognitive system is to distinguish significant social information and to adapt behaviour in accordance with the interpretation of this information (Grossmann & Johnson, 2007). By reading, interpreting and understanding facial expressions, adequate actions based on other people's mental states can be taken (Back, Ropar, & Mitchell, 2007; Baron-Cohen, Wheelwright, & Jolliffe, 1997). As such, facial expressions contribute to an individual's social development and ability to interact with others (Back et al., 2007; Baron-Cohen et al., 1997). Accurate emotion recognition underpins emotion regulation and contributes to social competence (Green & Baker, 2011; Izard et al., 2001). For example, poor emotion recognition and regulation in school-aged children has been linked to difficulties in relating to peers, which

* Corresponding author at: School of Occupational Therapy & Social Work, Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845,

Australia. Tel.: +61 8 9266 9051; fax: +61 8 9266 3636.

E-mail address: [email protected] (T. Falkmer).

1750-9467/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.

http://dx.doi.org/10.1016/j.rasd.2013.03.009


places the individual at increased risk of rejection and social isolation, and decreases the likelihood of developing relationships and participating in social activities (Goodfellow & Nowicki, 2009; Izard et al., 2001; Stichter et al., 2010; Turkstra, Williams, Tonks, & Frampton, 2008).

The processing of facial information can be based on recognition of individual facial features or on a more holistic process (Tanaka & Farah, 1993; Taylor, Edmonds, McCarthy, & Allison, 2001; Taylor, Batty, & Itier, 2004). Featural processing can be contrasted with both holistic and configural processing (Taylor et al., 2004). In holistic processing, the face is perceived as a unit (gestalt), whilst processing of the relationships between facial features is configural (Taylor et al., 2004). Both holistic and configural processing are part of face processing from early childhood. However, configural processing ability continues to develop qualitatively during most of the childhood years (Mondloch, Le Grand, & Maurer, 2002a; Taylor et al., 2004). Hence, the ability to recognise emotions develops with age, but infants as young as four months are able to gauge basic emotions, such as anger, happiness and surprise, from facial expressions (Bornstein & Arterberry, 2003; Vicari, Reilly, Pasqualetti, Vizzotto, & Caltagirone, 2000). People's emotion recognition becomes increasingly accurate and faster throughout late childhood and adolescence, before peaking in adulthood (De Sonneville et al., 2002; Rump, Giovannelli, Minshew, & Strauss, 2009). Identifying emotional expressions may be difficult, as it requires attention to small inconsistencies in facial features, such as slight upward or downward curling of the mouth (Jemel, Mottron, & Dawson, 2006). Recognising facially expressed emotions may not be as reliant on configural processing as recognising faces themselves (Rump et al., 2009), since emotions can sometimes be recognised from information obtained from individual facial features (Sullivan, Ruffman, & Hutton, 2007). However, configural processing, which integrates information regarding the spatial relationships between facial features, is often more successful in interpreting information from faces (Tanaka & Farah, 1993), but develops more gradually than featural processing, which relies on information from each facial feature separately (Maurer, Le Grand, & Mondloch, 2002; Mondloch, Le Grand, & Maurer, 2002b). It has been suggested that adults' increased ability to recognise facially expressed emotions is due to an increased subtlety in discriminating configural information (Rump et al., 2009); adults have usually developed, and consequently employ, a configural processing strategy that enables them to detect subtle changes in faces as varying emotions (Maurer et al., 2002; Mondloch et al., 2002a, 2002b; Thomas, De Bellis, Graham, & LaBar, 2007). In contrast, evidence suggests that people with autism spectrum disorders (ASD) favour featural over configural processing (Behrmann et al., 2006; Frith, 1994; Neumann, Spezio, Piven, & Adolphs, 2006).

ASD, which include HFA and AS, are generally characterised by difficulties with social interaction and social communication (American Psychiatric Association, 2000). It has been proposed that individuals with HFA/AS employ a more effortful and less automatic process when recognising facially expressed emotions, with a propensity to fixate on, and therefore process, particular features of the face, such as the mouth, separately (Behrmann et al., 2006; Neumann et al., 2006). This propensity to process local stimuli independently, instead of processing the related parts as a global image, has been labelled weak central coherence (Frith, 1994; Happe & Frith, 2006). Studies involving eye tracker technology support the notion that people with HFA/AS attend less to the eye area, and more to the mouth region, than controls1 when searching for information during emotion recognition (Baron-Cohen et al., 1997; Neumann et al., 2006). However, no or minimal between-group differences were found in other studies involving static stimuli; i.e., people with HFA/AS did, in fact, use information from the eyes, and the number of fixations on the eye area was greater than that on the mouth region (Neumann et al., 2006; Rutherford & Towns, 2008). Despite these contradictory patterns, these studies support the notion that individuals with HFA/AS experience difficulties in identifying emotions from facial expressions, and attend more to the mouth and less to the eyes, as the stimuli being presented, and/or the situation, become more complex or dynamic (Rutherford & Towns, 2008).

Studies comparing the accuracy of children and adolescents with HFA/AS and controls when recognising simple and complex facially expressed emotions found no significant between-group differences (Farran, Branson, & King, 2011; Grossman, Klin, Carter, & Volkmar, 2000; Rump et al., 2009; Tracy, Robins, Schriber, & Solomon, 2011). In contrast, Gross (2004) found that children with ASD were less able to recognise emotions than their matched controls. Analogous to the visual search strategies of adults with ASD, children and adolescents with ASD attend less to the eyes and focus more on the mouth when recognising facially expressed emotions (de Wit, Falck-Ytter, & von Hofsten, 2008; Gross, 2004; Grossman & Tager-Flusberg, 2008). Similarly, Tracy et al. (2011) hypothesised that children and adolescents with HFA/AS would demonstrate slower recognition of emotions than controls, due to their more effortful and less automatic method of processing; however, no significant differences were found. It has been suggested that discrepancies between results regarding emotion recognition may be due to a great variety of task demands, differences in measurements, and the demographic characteristics of the participants in the studies (Harms, Martin, & Wallace, 2010). If the task at hand allows for compensatory strategies to be employed, and the outcome variable is solely recognition accuracy, labelling or matching emotions, differences in processing strategies may not be obvious in persons with HFA/AS. Hence, different task demands may explain disparities in measurements of emotion recognition in this group. Furthermore, the ability to recognise emotions seems to depend on age and on verbal and non-verbal abilities in children with ASD, and different matching may therefore produce discrepant results between studies in this area (Harms et al., 2010).

Falkmer, Bjallmark, Larsson, and Falkmer (2011b) used static stimuli and eye tracking technology to explore the emotion recognition ability and visual search strategies, i.e., the number of fixations and fixation durations, of adults with

1 Throughout the present study, the term controls refers to persons without any autism spectrum disorder.


HFA/AS. The study showed that adults with HFA/AS experienced greater difficulties than their matched controls when recognising facially expressed emotions, especially when the eyes were bisected. They also found that adults with HFA/AS focused more often on the mouth and other parts of the face, compared with the controls, while controls attended more to the eyes than the participants with HFA/AS. Furthermore, the controls fixated longer on the mouth than those with HFA/AS when learning the emotion in the puzzle piece slides, while the adults with HFA/AS fixated longer on other parts of the face (other than the eyes) when recognising the emotion in the choice slides (Falkmer et al., 2011b). The present study aimed to replicate and extend the previous study of Falkmer et al. (2011b) with children with HFA/AS, in order to determine whether the aforementioned tendencies were present in them, thus adding to the current knowledge base regarding emotion recognition in people with HFA/AS across the lifespan. Consequently, the present study intended to answer the following research questions:

(a) Are children with HFA/AS less proficient than their matched controls at recognising facially expressed emotions?

(b) Are children with HFA/AS impacted by changes to the presentation of the eye area, i.e., whether the eyes are bisected or whole, when recognising emotions?

(c) Do children with HFA/AS differ from controls in the number of fixations, by displaying a preference towards focusing on the mouth and other parts of the face (other than the eyes) when recognising emotions?

(d) Do children with HFA/AS differ from controls in fixation durations, and fixate longer on other parts of the face (other than the eyes) than controls when recognising emotions?

1. Methods

1.1. Research participants

Participants were recruited through the Telethon Institute for Child Health Research, local primary schools, personal contacts, and local radio and newspaper advertisements in the Perth metropolitan area in Western Australia. The inclusion criteria specified children with HFA/AS who were able to read and understand written and verbal instructions in English. Their medical records were sighted to confirm their diagnoses and that they were diagnosed by authorised and experienced professionals in independent medical clinics in accordance with internationally recognised criteria. Children with diagnosed comorbid cognitive impairments or developmental disorders were excluded based on these conditions being documented in their medical records. A total of 22 boys and 4 girls with HFA/AS, aged 8–12 years (mean age 10.6 years, standard deviation (SD) 1.3, median 10.8 years), and controls (22 boys and 4 girls, mean age 10.8 years, SD 1.1, median 11.1 years) participated in the study. There was no difference in age between boys (mean age 10.8 years, SD 1.7, median 11.1 years) and girls (mean age 10.1 years, SD 0.8, median 9.8 years) (Z = 1.930, p = .054).

1.2. Apparatus

A head-mounted Arrington ViewPoint™ eye tracker, sampling at 60 Hz, was used to measure the movements of the participants' eyes, i.e., the number of fixations and fixation durations. The eye tracker was worn over participants' glasses when necessary. The eye tracker was set to record movements of one eye only, usually the right, following a visual tracking screening to verify that the participants' eyes moved smoothly and congruently. The participants were not explicitly tested to determine their dominant eye, since the role of the dominant eye in visual processes where both eyes are used is not clear (Mapp, Ono, & Barbeito, 2003). However, all participants were tested for strabismus or amblyopia through a face-to-face procedure. Furthermore, the calibration procedure, together with the repeated calibration checks between each set of stimuli, ensured that participants with a dominant left eye came to the researchers' attention, in which case the eye tracker's left camera was used. In order to reduce systematic error and increase the accuracy of the results, the participants' heads were stabilised using an Arrington Ultra Precision Head Positioner™, and the eye tracker cables were secured to a chair and a makeshift hat. Participants were seated approximately one metre from a 42-inch flat screen television, which displayed the stimuli. A 16-point calibration of the eye tracker was conducted prior to commencing the test items; and in order to ensure that calibration was maintained throughout the trial, participants were instructed to fixate on a black dot on a white slide, which was presented between each of the slides displaying the stimuli.

1.3. The stimuli

The participants were shown 12 pairs of slides (see Fig. 1), which displayed colour photos of faces (six male and six female) expressing one of three basic emotions, i.e., anger, happiness and surprise (Ekman, 1992). These slides were the same stimuli used in the previous study involving adults with HFA/AS (Falkmer et al., 2011b). The first slide in each pair showed a photo of a face expressing one of the three basic emotions, broken up into six puzzle pieces. The second slide showed three photos of the same person from the preceding puzzle piece slide, expressing each of the aforementioned emotions. The photo that was broken up into puzzle pieces in the first slide was never identical to the photo with the matching emotion shown in the second slide. For example, as shown in Fig. 1, the person's mouth may be open in the puzzle pieced photo, but closed in the face with the corresponding emotion in the second slide. To determine the


Fig. 1. Each of the 12 pairs of slides was presented in the process illustrated above. Please note that the eyes in the puzzle piece slide (left) are whole (contrary to the puzzle piece slide presented in Fig. 2), and that the surprised face in the choice slide (right) is not identical to the puzzle pieced photo in the first slide.

Fig. 2. Left: an example of how the puzzle pieces in the first slide were classified into the various areas of interest. Clockwise from the top-left, the puzzle pieces were classified as eye, mouth, other, mouth, eye, and eye. Please note that the eyes in this puzzle piece slide are bisected. Right: an example of how the three photos in the choice slide were classified. The numbers 1 and 2 indicate the eye area and mouth area, respectively. Any fixations on the face outside these areas were classified as other (part of the face), while fixations not on the face were classified as elsewhere.


importance of, and reliance on, information from the eye area, six of the 12 puzzle pieced photos were sectioned so the eyes were bisected (see Fig. 2), while the remaining six depicted whole eyes (Fig. 1).

The participant identified the face in the second slide with the emotion matching that shown in the first slide (puzzle pieces). Identification was recorded as correct or incorrect.

1.4. Procedures

Following a visual acuity test, which all participants passed, they were shown the 12 pairs of slides. The procedures replicated those of the previous study involving adults with HFA/AS (Falkmer et al., 2011b).

1.5. Classification of fixation areas

As shown in Fig. 2, each puzzle piece in the first slide of each pair was classified into one of three different areas of interest, i.e., mouth, eyes, or other (part of the face), depending on the most prominent facial structure featured in the puzzle piece. For the second slide, depicting the three alternative faces, fixations were also classified by the same three areas of interest.


Fixations not focusing on the aforementioned areas of interest were classified as "elsewhere". The locations of the participants' fixations were determined by the same fixation generation programme (Falkmer, Dahlman, Dukic, Bjallmark, & Larsson, 2008) used in the previous study with adults with HFA/AS (Falkmer et al., 2011b). With an inter-rater reliability of 94.5%, determined from a random set of 100 centroid mode generated fixations between the first author and two independent raters, the number of fixations and fixation durations were manually coded by multiple raters according to these areas of interest.
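The area-of-interest coding described above amounts to mapping each fixation point to one of four mutually exclusive regions. The following sketch illustrates the idea; the rectangles and coordinates are invented for illustration and are not the study's actual screen regions or software:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned screen rectangle (invented coordinates for illustration)."""
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

# Hypothetical regions for one face photo.
EYES = Rect(100, 80, 300, 140)
MOUTH = Rect(150, 220, 250, 280)
FACE = Rect(80, 40, 320, 320)

def classify_fixation(x: float, y: float) -> str:
    """Map a fixation point to eyes / mouth / other (part of the face) / elsewhere."""
    if EYES.contains(x, y):
        return "eyes"
    if MOUTH.contains(x, y):
        return "mouth"
    if FACE.contains(x, y):
        return "other"        # on the face but outside eyes and mouth
    return "elsewhere"        # not on the face at all

print(classify_fixation(200, 100))  # -> "eyes"
print(classify_fixation(200, 250))  # -> "mouth"
print(classify_fixation(90, 300))   # -> "other"
print(classify_fixation(10, 10))    # -> "elsewhere"
```

Checking specific regions before the general face region gives the eyes and mouth priority, which mirrors how the coded areas of interest are mutually exclusive.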

1.6. Statistical analyses

Mean numbers of accurate responses within each emotion and for bisected vs. whole eyes were calculated and compared between children with HFA/AS and controls using the Mann–Whitney U-test. Accuracy was also modelled as a function of emotion, bisected vs. whole eyes and group belonging using a generalised estimating equation (GEE). This comprehensive model incorporated all the data (i.e., all 12 pairs of slides among the 26 + 26 participants, generating 624 records classified as 'correct' or 'incorrect') and took into account the correlations between answers belonging to the same subject. The results from the model included p-values comparing accuracy across emotions, bisected vs. whole eyes and group belonging.

Between-group differences in the number of fixations were analysed using a random effects regression model. This model was used so that correlations between observations made by the same child could be taken into account (the child was specified as the random effect). As the number of fixations was not normally distributed, a log transformation was performed prior to analysis to improve the assumption of normality.

The average durations of fixations (in milliseconds) were not normally distributed and were therefore analysed using the Mann–Whitney U-test. Bonferroni corrections were applied where necessary. Effect sizes were calculated using Cohen's d and Rosenthal's r. With an α-value of .05 and a β-value of .2, the 26 participants in each group allowed a Cohen's d of 0.78 or larger to be detected. Statistical analyses were performed using the IBM SPSS version 20 and SAS version 9.2 (SAS Institute Inc., Cary, NC, USA, 2008) statistical software packages.
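The stated minimum detectable effect size follows from the standard two-sample approximation d ≈ (z₁₋α/₂ + z₁₋β)·√(2/n). A quick check with the Python standard library (a sketch, not the authors' own computation) reproduces the paper's figure:

```python
from math import sqrt
from statistics import NormalDist

# Minimum detectable Cohen's d for a two-sided, two-sample comparison:
# d = (z_{1-alpha/2} + z_{1-beta}) * sqrt(2/n), with n participants per group.
alpha, beta, n_per_group = 0.05, 0.20, 26

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
z_beta = NormalDist().inv_cdf(1 - beta)        # ~0.84

d_min = (z_alpha + z_beta) * sqrt(2 / n_per_group)
print(f"minimum detectable Cohen's d ~ {d_min:.2f}")  # prints ~0.78, matching the text
```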

1.7. Ethical considerations

This study was approved by the Human Research Ethics Committee of Curtin University (approval numbers OTSW-03-2011 and OTSW-10-2011). Each participant and their parent/guardian were provided with information sheets detailing the objectives, procedures and ethical considerations of the study. Participation was completely voluntary, and participants were free to withdraw at any time without explanation. Informed assent from the participants and written informed consent from their parents/guardians were obtained prior to commencing the trial. Participants were given two cinema tickets as tokens of appreciation for participating in the study.

2. Results

The number of accurate responses is presented first, followed by the number of accurate responses when the eyes were bisected or whole, the number of fixations, and the fixation durations.

2.1. Number of accurate responses

As shown in Fig. 3, there was no difference between the groups in the total number of accurate responses (Z = 0.26, p = .795), with the mean number of accurate responses in the group with HFA/AS being 7.69 (SD 2.13) compared to 7.96 (SD 1.95) in the controls.

2.2. Number of accurate responses when the eyes were bisected and whole

In Table 1, the results regarding the number of accurate responses when the eyes were bisected or whole in the puzzle piece slides are presented.

The Mann–Whitney U-test identified no differences between the HFA/AS and control groups, regardless of the emotion expressed and whether the eyes were bisected or whole. The GEE model showed that accuracy was significantly better when the eyes were whole (73%) rather than bisected (57%) (p < .001). While there was no significant difference in accuracy between the happy (55%) and angry (63%) emotions (p = .23), accuracy for the surprised emotion (78%) was significantly better than for each of these (p < .005 for both comparisons). These p-values were obtained from a multivariate model after adjustment for group belonging, which was not significant (consistent with the Mann–Whitney U-test above: accuracy for controls = 66%, HFA/AS = 64%; p = .63).

2.3. Number of fixations

The combined sample made a total of 23,410 fixations, with the group with HFA/AS accounting for 11,857 fixations and the controls for 11,553. The p-values provided for the number of fixations are based on log


Fig. 3. The median, range and inter-quartile range for the total number of accurate responses in both groups; 12 being the maximum number of accurate responses.

Table 1
The mean number of accurate responses for both groups related to the emotion expressed and whether the eyes were bisected or whole. The maximum number of correct answers is 2.

                           Mean number of accurate responses (range 0–2)
                           HFA/AS           Controls         p-Value*
Surprise, bisected eyes    1.35 (SD 0.75)   1.35 (SD 0.63)   .856
Surprise, whole eyes       1.62 (SD 0.5)    1.92 (SD 0.27)   .009
Anger, bisected eyes       1.15 (SD 0.73)   1.19 (SD 0.69)   .865
Anger, whole eyes          1.42 (SD 0.5)    1.23 (SD 0.65)   .312
Happiness, bisected eyes   0.85 (SD 0.83)   1.00 (SD 0.69)   .434
Happiness, whole eyes      1.31 (SD 0.74)   1.27 (SD 0.78)   .889

* Bonferroni correction of .0083.


transformed data. The means, standard deviations, minimums, medians, and maximums provided are, however, based on the raw data (Table 2). No significant between-group difference was found in the total number of fixations (p = .41).

However, for the combined sample including all parameters, there was a difference in the number of fixations on the various areas of interest (p < .001).

As shown in Table 3, the combined sample's fixations were mostly concentrated on the eyes, which differed significantly from each of the other areas of interest (p < .001). In descending order, fixations were then focused on the mouth (which differed from other (part of the face) and elsewhere; p < .001), followed by other parts of the face and elsewhere on the screen (which differed from each other; p < .001). Differences were found in the number of fixations made by all participants in relation to the emotion expressed (p < .001). However, a significant difference was only found when anger was compared to happiness (p < .001), and surprise was compared to happiness (p = .001). Furthermore, there were differences in the number of fixations made by all participants depending on whether the eyes were bisected or whole (p = .025), and when viewing the puzzle piece slides or choice slides (p < .001). Note that while Table 3 summarises the number of fixations across both groups, all p-values quoted above were adjusted for group belonging (control or HFA/AS), which was not statistically significant (p = .80 after adjustment for all other variables).

2.4. Fixation durations

This section is divided into an initial analysis of the puzzle piece slides (Table 4) and a combined puzzle piece and choice slide analysis (Table 5). As shown in Table 4, on average, the group with HFA/AS fixated longer on each of the various


Table 2
The mean, SD, minimum, median, and maximum number of fixations for both groups, across all parameters shown in Table 3.

           Mean   SD     Minimum   Median   Maximum
HFA/AS     9.35   8.22   0.00      7.00     63.00
Controls   9.23   8.09   0.00      8.00     69.00

Table 3
The mean, SD, minimum, median, and maximum number of fixations for the combined sample related to the areas of interest, the emotion expressed, whether the eyes were bisected or whole, and whether participants were viewing puzzle piece slides or choice slides.

                                 Mean    SD     Minimum   Median   Maximum
Areas of interest
  Eyes                           15.15   9.36   0.00      14.00    69.00
  Mouth                           9.58   5.94   0.00       9.00    32.00
  Other                           8.40   7.22   0.00       7.00    63.00
  Elsewhere                       3.95   5.06   0.00       3.00    46.00
Emotion expressed
  Anger                           9.32   7.68   0.00       8.00    69.00
  Happiness                       9.02   8.41   0.00       7.00    39.00
  Surprise                        9.50   8.36   0.00       7.00    63.00
Eyes bisected or whole
  Bisected                        9.74   9.28   0.00       7.00    69.00
  Whole                           8.86   6.87   0.00       8.00    63.00
Puzzle piece or choice slides
  Puzzle piece                   10.81   8.36   0.00      10.00    46.00
  Choice                          7.71   7.62   0.00       6.00    69.00

Table 4
The average fixation duration (SD) in milliseconds for both groups on the various areas of interest, related to whether the eyes were bisected or whole, when viewing the puzzle piece slides. Z values, r values and p values included.

           Eyes                  Mouth                 Other (part of the face)   Elsewhere
           Bisected   Whole      Bisected   Whole      Bisected    Whole          Bisected   Whole
HFA/AS     240 (170)  270 (288)  240 (203)  270 (388)  220 (182)   230 (193)      240 (212)  250 (211)
Controls   180 (277)  180 (288)  190 (285)  200 (302)  100 (209)   100 (227)      170 (217)  120 (189)
Z-value    24.25      21.46      17.11      16.97      24.77       24.17          6.85       9.58
r-value    3.36       2.98       2.37       2.35       3.44        3.35           0.95       1.37
p-value*   <.001      <.001      <.001      <.001      <.001       <.001          <.001      <.001

* Bonferroni correction of .0063.


areas of interest than the controls (p < .001) when viewing the puzzle piece slides, regardless of whether the eyes were bisected or whole.

As shown in Table 5, the average fixation duration of the group with HFA/AS was significantly longer than that of the controls (p < .001), irrespective of the area of interest, the emotion, whether participants were viewing puzzle piece or choice slides, and whether the eyes were bisected or whole.

3. Discussion

The present study found no difference between the participant groups in the number of correctly identified emotions. This accords with previous research indicating that children and adolescents with HFA/AS are as accurate as their matched controls in identifying facially expressed emotions (Farran et al., 2011; Grossman et al., 2000; Rump et al., 2009; Tracy et al., 2011), but conflicts with Gross (2004), who found that children with ASD were less able to recognise emotions. Most eye-tracking studies suggest that persons with ASD differ in their processing of facially expressed emotions (Harms et al., 2010), but results remain inconsistent regarding the visual strategies persons with ASD use when looking at them.

The complexity of the emotion appears to affect the choice of visual strategies more in persons with ASD than in controls (Rutherford & Towns, 2008). That no differences were found between the groups in the present study may be because only basic emotions were displayed as static photos. Furthermore, the study by Gross (2004) included children with autism as opposed to children with HFA/AS. Since emotion recognition appears to be influenced by cognitive functioning (Harms et al., 2010), the result from the present study could be interpreted as being in accordance with other


Table 5
The average fixation duration (SD) in milliseconds for both groups on the various areas of interest, related to the emotion expressed and whether the eyes were bisected or whole, when viewing the puzzle piece slides and choice slides. Z values, r values and p values included.

              Bisected                          Whole                             Choice
              Anger      Happiness  Surprise   Anger      Happiness  Surprise    Anger      Happiness  Surprise
Eyes
  HFA/AS      240 (186)  240 (167)  230 (158)  290 (248)  280 (333)  280 (364)   240 (157)  250 (175)  240 (178)
  Controls    230 (376)  170 (280)  150 (252)  220 (349)  170 (248)  230 (387)   150 (221)  160 (208)  150 (210)
  Z-value     8.57       13.20      13.13      8.56       11.28      7.90        11.91      10.02      11.18
  r-value     1.19       1.83       1.82       1.19       1.56       1.10        1.65       1.39       1.55
  p-value*    <.001      <.001      <.001      <.001      <.001      <.001       <.001      <.001      <.001
Mouth
  HFA/AS      250 (220)  270 (227)  250 (196)  310 (281)  310 (647)  260 (232)   220 (172)  210 (136)  220 (190)
  Controls    170 (287)  210 (291)  220 (368)  270 (386)  220 (329)  200 (319)   120 (159)  150 (205)  150 (197)
  Z-value     8.91       7.00       8.35       6.36       7.56       9.05        8.68       6.29       7.58
  r-value     1.24       0.97       1.16       0.88       1.05       1.26        1.20       0.87       1.05
  p-value*    <.001      <.001      <.001      <.001      <.001      <.001       <.001      <.001      <.001
Other (part of the face)
  HFA/AS      200 (136)  **         210 (150)  230 (193)  180 (122)  190 (142)   220 (164)  240 (205)  230 (190)
  Controls    140 (246)  **         150 (216)  130 (213)  130 (232)  130 (186)   110 (236)  100 (203)  100 (213)
  Z-value     4.98       **         4.56       9.82       7.23       6.00        17.51      20.67      21.84
  r-value     0.69       **         0.63       1.36       1.00       0.83        2.43       2.87       3.03
  p-value*    <.001      **         <.001      <.001      <.001      <.001       <.001      <.001      <.001
Elsewhere
  HFA/AS      230 (146)  240 (165)  260 (254)  250 (388)  250 (237)  230 (163)   230 (179)  240 (217)  260 (235)
  Controls    100 (278)  140 (355)  100 (175)  120 (348)  120 (250)  110 (438)   160 (220)  140 (211)  140 (189)
  Z-value     9.87       9.45       9.12       10.47      8.39       11.46       5.63       6.91       7.36
  r-value     1.37       1.31       1.27       1.45       1.16       1.59        0.78       0.96       1.02
  p-value*    <.001      <.001      <.001      <.001      <.001      <.001       <.001      <.001      <.001

* Bonferroni correction of .0014.
** Insufficient data available for statistical analysis.

Fig. 4. Number of accurate responses in the previous study (Falkmer et al., 2011b) with adults with HFA/AS and the present study.

Fig. 4. Number of accurate responses in the previous study (Falkmer et al., 2011b) with adults with HFA/AS and the present study.

D. Leung et al. / Research in Autism Spectrum Disorders 7 (2013) 833–844840

previous studies showing no differences in regard to the ability to accurately recognise emotions in children with HFA/AS(Harms et al., 2010).
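The statistics reported in Table 5 can be related to one another with a short sketch. This is an illustrative reconstruction, not the authors' analysis code: the convention r = Z/√N for a nonparametric effect size, the value of N, and the count of 36 comparisons behind the .0014 Bonferroni threshold are all assumptions.

```python
import math

# Illustrative sketch only (not the authors' analysis code).
# A common convention for a nonparametric effect size is r = Z / sqrt(N),
# where N (assumed here) is the number of observations in the comparison.
def effect_size_r(z: float, n: int) -> float:
    return z / math.sqrt(n)

# Bonferroni-corrected significance threshold: alpha divided by the number
# of comparisons. Assuming 36 comparisons (4 areas of interest x 3 emotions
# x 3 slide conditions) reproduces the table footnote's .0014.
def bonferroni_alpha(alpha: float = 0.05, m: int = 36) -> float:
    return alpha / m

print(round(bonferroni_alpha(), 4))  # 0.0014
print(effect_size_r(8.0, 16))        # 2.0
```

Only p values below the corrected threshold are treated as significant, which is why every comparison in Table 5 is flagged at the <.001 level rather than at .05.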

Since the present study replicated the previous study with adults with HFA/AS (Falkmer et al., 2011b) and used the same stimuli, it is interesting to compare the number of accurate responses between the child and adult groups. As shown in Fig. 4, the mean number of accurate responses in the group of children with HFA/AS was similar to that of the group of adults with HFA/AS, while the adult controls in the previous study (Falkmer et al., 2011b) out-performed the child controls in the present study. This supports, in part, Rump et al.'s (2009) assertion that emotion recognition ability in children with HFA/AS develops exponentially during late childhood to match that of children without ASD of their age, and their performance continues to match that of controls throughout adolescence. However, unlike their matched controls, who become increasingly proficient in recognising emotions beyond adolescence and throughout adulthood, individuals with HFA/AS cease to develop further in this regard. These results add to the assumption that beyond late childhood, a change occurs in controls that does not occur in individuals with HFA/AS, which causes a discrepancy between the two groups in emotion recognition ability during adulthood. This increased ability to recognise faces and facially expressed emotions in adults may not only be the result of a shift from featural to configural face processing (Rump et al., 2009), but also due to an increased subtlety in discriminating configural information. The differences in the ability to identify faces and facially expressed emotions between adults with and without HFA/AS may be the result of differences in development during late childhood, in which persons with HFA/AS do not fully obtain the same subtlety in configural information processing as persons without HFA/AS. This could possibly be due to differences in visual strategies, which may mean that children with HFA/AS do not continue to improve their ability to recognise emotions, and therefore do not reach the face expertise level seen in adults without HFA/AS (Rump et al., 2009). Furthermore, visual strategies vary depending on the task at hand (Tatler, Wade, Kwan, Findlay, & Velichkovsky, 2010). The participants in the present study were asked to perform a two-part task. In the first part (viewing puzzle piece slides), it could be assumed that they tried to obtain sufficient information to later be able to recognise the emotion in the choice slides. In the second part, their aim was to recognise one of three emotions in the choice slides as the one previously viewed. The puzzle piece design restricted the use of a configural processing strategy in the first, information-seeking part. The difference in emotion recognition in adults, but not in children, when using the stimuli in the present study may indicate that increased subtlety in discriminating configural information develops beyond late childhood. Hence, the impact on enhanced emotion recognition abilities that such subtle strategies may produce could not be measured in this group of young participants. However, viewed faces are believed to be compared with stored mental prototypes that develop through exposure and experience (Sullivan et al., 2007). The same may be true for facially expressed emotions. Difficulties in recognising facially expressed emotions may therefore not only be a result of differences in visual strategies. Less developed mental prototypes of facially expressed emotions in persons with ASD could also have an impact on the development of emotion recognition skills. The impact of fewer mental representations may be more pronounced in adults, since children are still developing their mental storage of facial prototypes.

In the previous study (Falkmer et al., 2011b), only the adults with HFA/AS showed a decline in the number of accurate responses when the eyes were bisected in the puzzle piece slides. In contrast, the number of correct answers of both the group of children with HFA/AS and the controls in the present study was lower when the eyes were bisected. Previous studies (de Wit et al., 2008; Gross, 2004; Grossman & Tager-Flusberg, 2008) found that children and adolescents with ASD attend less to the eye area and focus more on the mouth when recognising facially expressed emotions. The results from the present study challenge such conclusions, because if what they have postulated holds true, then the group with HFA/AS would have performed no differently regardless of whether the eyes were bisected or whole. Furthermore, the fact that both the group with HFA/AS and the controls were negatively affected when the eyes were bisected may be due to both groups continuing to favour featural processing throughout late childhood, and not yet having developed the ability to process configural information.

However, comparing the results between the previous (Falkmer et al., 2011b) and present study, both the children and adults with HFA/AS experienced difficulties in recognising emotions when the eyes were bisected, while in the control groups, only the children were affected. This is in accordance with previous studies (De Sonneville et al., 2002; Rump et al., 2009; Thomas et al., 2007), which show that the emotion recognition ability of controls, unlike that of those with HFA/AS, continues to develop and peaks in adulthood. The findings suggest that while controls eventually develop, and become proficient in, configural processing beyond late childhood and adolescence, individuals with HFA/AS either do not develop configural processing, or do not become skilled in processing configural information when recognising emotions.

In contrast to the results found in the previous study with adults with HFA/AS (Falkmer et al., 2011b), which showed that adults with HFA/AS fixate less on the eyes and more on other parts of the face when identifying facially expressed emotions compared with controls, there was no difference between the groups in the total number of fixations on any part of the face. This indicates that during late childhood, individuals with HFA/AS practise visual search strategies when recognising emotions similar to those of their matched controls. Although these strategies appear to change in individuals without HFA/AS during adulthood, some studies suggest that adults with HFA/AS prefer information from the mouth only when emotions are complex and the social contexts are dynamic (Baron-Cohen et al., 1997; Rutherford & Towns, 2008; Speer, Cook, McMahon, & Clark, 2007).

Since there was no between-group difference in the total number of fixations, additional multivariate statistical analyses took into account both independent groups as one combined sample. Participants were shown the puzzle piece slides for ten seconds and instructed to identify the emotion being expressed. When viewing the choice slides, they were asked to take their time to recognise which emotion was previously shown as puzzle pieces. Eye movements appear to play a role in learning faces and facially expressed emotions (Henderson, Williams, & Falk, 2005). The eye movement patterns generated when individuals learn a face or an emotion do not mirror the patterns when individuals are recognising a face or an emotion; in fact, they lessen and become limited during the recognition stage. The findings of the present study are in accordance with Henderson et al.'s (2005) conclusions, as the number of fixations for the combined sample was higher when participants were learning the emotion shown in the puzzle piece slides, compared to when they were recognising the emotion from the three alternative faces in the choice slides.


The combined sample showed a preference for the eyes and mouth over other parts of the face and elsewhere on the screen, with most of the fixations amalgamated on the eye area. This contradicts Tracy et al.'s (2011) study, which concluded that children and adolescents with HFA/AS employ a more effortful and less automatic method of processing facially expressed emotions that differs from that of controls. The results of the present study indicate that both groups consider the eyes to be the most important source of information when recognising facially expressed emotions.

Happiness is thought to be the easiest facially expressed emotion to identify because it is the initial emotional expression that infants learn and are able to categorise (Bornstein & Arterberry, 2003). The expression of happiness can be recognised by information from the mouth alone (Sullivan et al., 2007). In contrast, surprise can be determined by attending to the eyes and mouth, while anger requires information from the eyes, mouth and forehead (Rump et al., 2009; Sullivan et al., 2007). This may account for the combined sample making fewer fixations when viewing slides that showed happy faces, in comparison to slides that showed angry or surprised faces, as they needed only to focus on one facial feature, i.e., the mouth. It also adds evidence that individuals, including those with HFA/AS, have a happy face advantage (Farran et al., 2011; Grossman et al., 2000).

As previously mentioned, the study participants may not yet have developed the ability to process configural information, and therefore preferred featural processing when recognising emotions. Hence, as it is more difficult to process the eyes as a feature when they are bisected or incomplete, both the group with HFA/AS and the controls fixated more often when viewing puzzle piece slides where the eyes were bisected, in order to gather adequate information to identify the emotion. However, from these results, it is not clear whether the participants attended to other facial features, such as the mouth and other parts of the face, as a result of the eyes being bisected, or whether they merely looked more often at the bisected eyes.

Fixation durations indicate how difficult stimuli are in a top-down task, or how interesting they are in a bottom-up task; the more difficult or interesting the stimuli, the longer the fixation durations (DeAngelus & Pelz, 2009). Although there was no overall difference between the groups in the number of accurate responses, the group with HFA/AS displayed longer fixations than the controls. This suggests that while children with HFA/AS may be able to recognise facially expressed emotions as accurately as their matched controls, they find this top-down task more demanding. Why then was there no difference between the groups in the number of accurate responses? One possible explanation may be a limitation in the methodology. Allowing participants unlimited time to view stimuli may not validly reflect the demands placed on individuals when recognising emotions in real-life situations, as emotional expressions and interactions are brief (Rump et al., 2009). While participants were limited in the amount of time they were permitted to view the puzzle piece slides, when viewing the choice slides, the instruction was to take their time to recognise, as accurately as possible, the emotion previously seen in the puzzle piece slides. Thus, had the present study factored in response time, i.e., set a time limit within which participants were to respond, it may have noted a difference between the participant groups in the number of accurate responses: while the group with HFA/AS may be able to provide an accurate response, they may rely on compensatory strategies and experience difficulties in rapidly providing an accurate response, which is required in real-life situations.
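The group comparisons of fixation durations discussed above can be sketched as a minimal aggregation step. This is an illustrative sketch with a hypothetical data layout and made-up millisecond values, not the study's analysis software.

```python
from statistics import mean

# Illustrative sketch (hypothetical data layout, not the study's software):
# aggregate fixation durations per area of interest (AOI), as in Table 5.
def mean_duration_by_aoi(fixations):
    """fixations: iterable of (aoi, duration_ms) -> {aoi: mean duration_ms}."""
    by_aoi = {}
    for aoi, duration_ms in fixations:
        by_aoi.setdefault(aoi, []).append(duration_ms)
    return {aoi: mean(durations) for aoi, durations in by_aoi.items()}

# Made-up example values, in milliseconds:
hfa_as_fixations = [("eyes", 240), ("eyes", 260), ("mouth", 250)]
control_fixations = [("eyes", 150), ("eyes", 170), ("mouth", 160)]
print(mean_duration_by_aoi(hfa_as_fixations)["eyes"])   # 250
print(mean_duration_by_aoi(control_fixations)["eyes"])  # 160
```

Group-level means computed this way would then feed the nonparametric between-group tests whose Z, r and p values appear in Table 5.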

Consequently, these results suggest the necessity and opportunity for interventions that enable individuals with HFA/AS to better adapt their visual search strategies to a range of social situations and tasks, such as school and work, so that they are able to manage the social demands of everyday life. For example, one factor that may contribute to the decreased emotion recognition ability of individuals with ASD is the lack of mental representations of different emotional expressions against which they compare what they see (Humphreys, Minshew, Leonard, & Behrmann, 2007; Rump et al., 2009). Therefore, one possible intervention may be to increase the exposure of children with HFA/AS to emotion exemplars, in order to increase their internal representation of different emotions. Other direct interventions found to produce measurable improvements in emotion recognition ability and social skills include computer software, most of which is designed for use with children, and group-based training (Herbrecht et al., 2009; Hopkins et al., 2011; Lacava, Golan, Baron-Cohen, & Myles, 2007; Lopata et al., 2010; Ramdoss et al., 2012; Stichter et al., 2010). However, learning and rehearsing the emotional and social skills introduced through these interventions in real-life settings is essential to the generalisation of the skills (Hopkins et al., 2011; Lopata et al., 2010; Ramdoss et al., 2012). Implementing interventions as early in childhood as possible may also contribute to lessening the discrepancy in emotion recognition between the groups in adulthood. However, prior to this, the present study needs to be replicated with adolescents with HFA/AS, in order to determine whether individuals with HFA/AS continue to perform at the same level as their matched controls throughout adolescence, as well as to establish precisely when between-group differences in performance and visual search strategies begin to emerge, i.e., during adolescence or early adulthood.

4. Limitations

Inconsistencies have been found in the visual search strategies used by individuals with HFA/AS during emotion recognition when viewing static stimuli as opposed to dynamic stimuli (Rutherford & Towns, 2008). It has also been postulated that dynamic stimuli better replicate real-life situations, and so produce more accurate results regarding visual processing during emotion recognition in day-to-day living (Klin, Jones, Schultz, Volkmar, & Cohen, 2002). Contrary to this postulate, visual search strategies in adults, with and without HFA/AS, appear to remain consistent when viewing both static and dynamic stimuli (Falkmer, Bjallmark, Larsson, & Falkmer, 2011a). However, whether the same applies to children remains to be investigated. A further limitation of the present study is that the participating children in the two groups were matched on age but not verbal and non-verbal IQ (Harms et al., 2010). Nevertheless, the use of the same stimuli as in a previous study with adults with HFA/AS allows for a comparison of the results from these two studies. Therefore, the results from the present study add knowledge regarding the trajectory of the development of emotion recognition, which has been identified as lacking (Harms et al., 2010).

At times, it was difficult to obtain a perfect calibration of the eye tracker. Additionally, participants shifting position throughout the trial may have impacted the calibration. To account for this, a black dot on a white slide was shown between each of the slides displaying stimuli, and the participants were instructed to focus on the black dot when it appeared. The distance between the fixation dot and the black dot was continuously monitored, and if it became too large and/or the participant requested a break, the eye tracker was re-calibrated. Consequently, when manually classifying the fixation areas, the discrepancy between the participants' fixation dot and the black dot, which is where the true fixation was assumed to be, was taken into account in the analyses. If an adequate calibration could not be obtained before and throughout the trial, the data were not used. Although multiple raters classified the fixation areas, which posed a threat to inter-rater reliability, an overall inter-rater reliability of 94.5% was achieved, as mentioned.
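A percent-agreement figure such as the 94.5% reported here can be computed as below. This is an illustrative sketch with made-up classifications, not the authors' procedure; note that simple percent agreement, unlike chance-corrected measures such as Cohen's kappa, does not account for agreement expected by chance.

```python
# Illustrative sketch (made-up classifications, not the authors' procedure):
# percent agreement between two raters' manual fixation-area classifications.
def percent_agreement(rater_a, rater_b):
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and equal in length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

rater_1 = ["eyes", "mouth", "eyes", "other", "eyes"]
rater_2 = ["eyes", "mouth", "eyes", "eyes", "eyes"]
print(percent_agreement(rater_1, rater_2))  # 80.0
```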

Finally, although the groups were matched in age and sex, convenience and snowball sampling methods yielded a sample of volunteers, which may have caused selection bias. The participants may not be representative of the general population, and may reflect a relatively ''well-functioning'' group, thus potentially affecting the external validity of the results.

5. Conclusions

Demonstrating comparable emotion recognition ability, children with HFA/AS were as accurate as their matched controls in recognising facially expressed emotions. The performances of both the group with HFA/AS and the controls were better when the eyes were presented as whole in the puzzle piece slides. Moreover, both groups displayed a preference for the eyes, with most of their fixations concentrated on the eye area. These results indicate that children with HFA/AS, as well as controls, consider the eyes to be most important during emotion recognition. However, children with HFA/AS displayed longer fixations than the controls, suggesting that they may find the task of emotion recognition more challenging.

Acknowledgements

The research was supported by the Autism Association of Western Australia; the Telethon Institute of Child Health Research; and the Catholic Education Office of Western Australia. We are grateful to the participants and their families, Katie Anderson for assisting with the data collection, Tim Parkin from AIM Employment and Anette Wallerman from AUW-Konsult for additional data processing.

References

American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders. Text revision (DSM-IV-TR) (4th ed.). Washington: American Psychiatric Publishing.

Back, E., Ropar, D., & Mitchell, P. (2007). Do the eyes have it? Inferring mental states from animated faces in autism. Child Development, 78(2), 397–411.

Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a ''language of the eyes''? Evidence from normal adults and adults with autism or Asperger syndrome. Visual Cognition, 4(3), 311–331.

Behrmann, M., Avidan, G., Leonard, G. L., Kimchi, R., Luna, B., Humphreys, K., et al. (2006). Configural processing in autism and its relationship to face processing. Neuropsychologia, 44(1), 110–129.

Bornstein, M. H., & Arterberry, M. E. (2003). Recognition, discrimination and categorization of smiling by 5-month-old infants. Developmental Science, 6(5), 585–599.

De Sonneville, L. M. J., Verschoor, C. A., Njiokiktjien, C., Op het Veld, V., Toorenaar, N., & Vranken, M. (2002). Facial identity and facial emotions: Speed, accuracy, and processing strategies in children and adults. Journal of Clinical and Experimental Neuropsychology, 24(2), 200–213.

de Wit, T. C. J., Falck-Ytter, T., & von Hofsten, C. (2008). Young children with autism spectrum disorder look differently at positive versus negative emotional faces. Research in Autism Spectrum Disorders, 2(4), 651–659.

DeAngelus, M., & Pelz, J. B. (2009). Top-down control of eye movements: Yarbus revisited. Visual Cognition, 17(6/7), 790–811.

Ekman, P. (1992). Are there basic emotions? Psychological Review, 99(3), 550–553.

Falkmer, M., Bjallmark, A., Larsson, M., & Falkmer, T. (2011a). The influences of static and interactive dynamic facial stimuli on visual strategies of persons with Asperger syndrome. Research in Autism Spectrum Disorders, 5(2), 935–940.

Falkmer, M., Bjallmark, A., Larsson, M., & Falkmer, T. (2011b). Recognition of facially expressed emotions and visual search strategies in adults with Asperger syndrome. Research in Autism Spectrum Disorders, 5(1), 210–217.

Falkmer, T., Dahlman, J., Dukic, T., Bjallmark, A., & Larsson, M. (2008). Fixation identification in centroid versus start-point modes using eye tracking data. Perceptual and Motor Skills, 106(3), 710–724.

Farran, E. K., Branson, A., & King, B. J. (2011). Visual search for basic emotional expressions in autism: Impaired processing of anger, fear and sadness, but a typical happy face advantage. Research in Autism Spectrum Disorders, 5(1), 455–462.

Frith, U. (1994). Autism: Explaining the enigma. Oxford: Basil Blackwell Ltd.

Goodfellow, S., & Nowicki, S. (2009). Social adjustment, academic adjustment, and the ability to identify emotion in facial expressions of 7-year-old children. The Journal of Genetic Psychology, 170(3), 234–244.

Green, S., & Baker, B. (2011). Parents' emotion expression as a predictor of child's social competence: Children with or without intellectual disability. Journal of Intellectual Disability Research, 55(3), 324–338.

Gross, T. F. (2004). The perception of four basic emotions in human and nonhuman faces by children with autism and other developmental disabilities. Journal of Abnormal Child Psychology, 32(5), 469–480.

Grossman, J. B., Klin, A., Carter, A. S., & Volkmar, F. R. (2000). Verbal bias in recognition of facial emotions in children with Asperger syndrome. Journal of Child Psychology and Psychiatry, 41(3), 369–379.

Grossman, R. B., & Tager-Flusberg, H. (2008). Reading faces for information about words and emotions in adolescents with autism. Research in Autism Spectrum Disorders, 2(4), 681–695.

Grossmann, T., & Johnson, M. H. (2007). The development of the social brain in human infancy. European Journal of Neuroscience, 25(4), 909–919.

Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5–25.

Harms, M., Martin, A., & Wallace, G. (2010). Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies. Neuropsychology Review, 20(3), 290–322.

Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33(1), 98–106.

Herbrecht, E., Poustka, F., Birnkammer, S., Duketis, E., Schlitt, S., Schmotzer, G., et al. (2009). Pilot evaluation of the Frankfurt social skills training for children and adolescents with autism spectrum disorder. European Child and Adolescent Psychiatry, 18(6), 327–335.

Hopkins, I. M., Gower, M. W., Perez, T. A., Smith, D. S., Amthor, F. R., Wimsatt, F. C., et al. (2011). Avatar assistant: Improving social skills in students with ASD through a computer-based intervention. Journal of Autism and Developmental Disorders, 41(11), 1543–1555.

Humphreys, K., Minshew, N., Leonard, G. L., & Behrmann, M. (2007). A fine-grained analysis of facial expression processing in high-functioning adults with autism. Neuropsychologia, 45(4), 685–695.

Izard, C., Fine, S., Schultz, D., Mostow, A., Ackerman, B., & Youngstrom, E. (2001). Emotion knowledge as a predictor of social behavior and academic competence in children at risk. Psychological Science, 12(1), 18–23.

Jemel, B., Mottron, L., & Dawson, M. (2006). Impaired face processing in autism: Fact or artifact? Journal of Autism and Developmental Disorders, 36(1), 91–106.

Klin, A., Jones, W., Schultz, R., Volkmar, F., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59(9), 809–816.

Lacava, P. G., Golan, O., Baron-Cohen, S., & Myles, B. S. (2007). Using assistive technology to teach emotion recognition to students with Asperger syndrome: A pilot study. Remedial and Special Education, 28(3), 174–181.

Lopata, C., Thomeer, M. L., Volker, M. A., Toomey, J. A., Nida, R. E., Lee, G. K., et al. (2010). RCT of a manualized social treatment for high-functioning autism spectrum disorders. Journal of Autism and Developmental Disorders, 40(11), 1297–1310.

Mapp, A. P., Ono, H., & Barbeito, R. (2003). What does the dominant eye dominate? A brief and somewhat contentious review. Perception & Psychophysics, 65(2), 310–317.

Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6(6), 255–260.

Mondloch, C. J., Le Grand, R., & Maurer, D. (2002a). Configural face processing develops more slowly than featural face processing. Perception & Psychophysics, 31(5), 553–566.

Mondloch, C. J., Le Grand, R., & Maurer, D. (2002b). Configural face processing develops more slowly than featural face processing. Perception, 31, 553–566.

Neumann, D., Spezio, M. L., Piven, J., & Adolphs, R. (2006). Looking you in the mouth: Abnormal gaze in autism resulting from impaired top-down modulation of visual attention. Social Cognitive and Affective Neuroscience, 1(3), 194–202.

Ramdoss, S., Machalicek, W., Rispoli, M., Mulloy, A., Lang, R., & O'Reilly, M. (2012). Computer-based interventions to improve social and emotional skills in individuals with autism spectrum disorders: A systematic review. Developmental Neurorehabilitation, 15(2), 119–135.

Rump, K. M., Giovannelli, J. L., Minshew, N. J., & Strauss, M. S. (2009). The development of emotion recognition in individuals with autism. Child Development, 80(5), 1434–1447.

Rutherford, M. D., & Towns, A. M. (2008). Scan path differences and similarities during emotion perception in those with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38(7), 1371–1381.

Speer, L., Cook, A. E., McMahon, W. M., & Clark, E. (2007). Face processing in children with autism. Autism, 11(3), 265–277.

Stichter, J. P., Herzog, M. J., Visovsky, K., Schmidt, C., Randolph, J., Schultz, T., et al. (2010). Social competence intervention for youth with Asperger syndrome and high-functioning autism: An initial investigation. Journal of Autism and Developmental Disorders, 40(9), 1067–1079.

Sullivan, S., Ruffman, T., & Hutton, S. B. (2007). Age differences in emotion recognition skills and the visual scanning of emotion faces. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 62(1), 53–60. http://psychsocgerontology.oxfordjournals.org/content/62/1/P53.abstract.

Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology Section A, 46(2), 225–245.

Tatler, B. W., Wade, N. J., Kwan, H., Findlay, J. M., & Velichkovsky, B. M. (2010). Yarbus, eye movements, and vision. i-Perception, 1(1), 7–27.

Taylor, M. J., Batty, M., & Itier, R. J. (2004). The faces of development: A review of early face processing over childhood. Journal of Cognitive Neuroscience, 16(8), 1426–1442.

Taylor, M. J., Edmonds, G. E., McCarthy, G., & Allison, T. (2001). Eyes first! Eye processing develops before face processing in children. NeuroReport, 12(8), 1671–1676. http://journals.lww.com/neuroreport/Fulltext/2001/06130/Eyes_first__Eye_processing_develops_before_face.31.aspx.

Thomas, L. A., De Bellis, M. D., Graham, R., & LaBar, K. S. (2007). Development of emotional facial recognition in late childhood and adolescence. Developmental Science, 10(5), 547–558.

Tracy, J. L., Robins, R. W., Schriber, R. A., & Solomon, M. (2011). Is emotion recognition impaired in individuals with autism spectrum disorders? Journal of Autism and Developmental Disorders, 41(1), 102–109.

Turkstra, L. S., Williams, W. H., Tonks, J., & Frampton, I. (2008). Measuring social cognition in adolescents: Implications for students with TBI returning to school. NeuroRehabilitation, 23(6), 501–509. http://iospress.metapress.com.

Vicari, S., Reilly, J. S., Pasqualetti, P., Vizzotto, A., & Caltagirone, C. (2000). Recognition of facial expressions of emotions in school-age children: The intersection of perceptual and semantic categories. Acta Paediatrica, 89(7), 836–845. http://onlinelibrary.wiley.com.