
The face module emerges from domain-general visual experience: a

deprivation study on deep convolutional neural network

Shan Xu#*, Yiyuan Zhang#, Zonglei Zhen, & Jia Liu*

#: equal contribution

*: corresponding author

Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing 100875, China.


* Correspondence:

Jia Liu, Ph.D., or Shan Xu, Ph.D.; Room 1415, New Main Building, 19 Xinjiekouwai St, Haidian District, Beijing 100875, China. Tel.: +86-10-58806154; fax: +86-10-58806154. E-mail: [email protected] or [email protected].


Abstract

Can faces be accurately recognized with zero experience of faces? The answer to this question is critical because it examines the role of experience in the formation of domain-specific modules in the brain. However, thorough investigations of this issue in human and non-human animals cannot easily dissociate the effect of visual experience from that of genetic inheritance, i.e., hardwired domain specificity. The present study addressed this problem by building a model of selective deprivation of experience of faces with a representative deep convolutional neural network (DCNN), AlexNet. We trained a new AlexNet with the same image dataset, except that all images containing faces of human and nonhuman primates were removed. We found that the experience-deprived AlexNet (d-AlexNet) did not show significant deficits in face categorization or discrimination, and face-selective modules still emerged automatically. However, the deprivation made the d-AlexNet process faces in a more parts-based fashion, similar to the way it processes objects. In addition, the face representation of the face-selective module in the d-AlexNet was more distributed, and its empirical receptive field was larger, resulting in a lower degree of selectivity of the module. In sum, our study provides evidence on the role of nature versus nurture in the development of domain-specific modules: domain specificity may evolve from non-specific stimuli and processes without genetic predisposition, and is then further fine-tuned by domain-specific experience.

Keywords: face perception, face domain, deep convolutional neural network, visual deprivation, experience


1 Introduction

A fundamental question in cognitive neuroscience is how nature and nurture form our cognitive modules. At the center of the debate is the origin of face recognition ability. Numerous studies have revealed both behavioral and neural signatures of face-specific processing, indicating a face module in the brain (for reviews, see Freiwald, Duchaine, & Yovel, 2016; Kanwisher & Yovel, 2006). Further studies from behavioral genetics revealed the contribution of genetics to the development of face-specific recognition ability in humans (Wilmer et al., 2010; Zhu et al., 2010). Collectively, these studies suggest an innate domain-specific module for face cognition. However, it is unclear whether visual experience is also necessary for the development of the face module.

A direct approach to address this question is visual deprivation. Two studies selectively deprived monkeys of visual experience of faces from birth, while leaving the rest of their experience untouched (Arcaro, Schade, Vincent, Ponce, & Livingstone, 2017; Sugita, 2008). They report that face-deprived monkeys are still capable of categorizing and discriminating faces (Sugita, 2008), though with a less prominent looking preference for faces over non-face objects (Arcaro et al., 2017). Further examination of the brains of the experience-deprived monkeys fails to localize typical face-selective cortical regions with the standard criterion; however, in the inferior temporal cortex, where face-selective regions are normally localized, face-selective activation (i.e., neural responses larger to faces than to nonface objects) is observed (Arcaro et al., 2017). Taken together, without visual experience of faces, rudimentary functions to process faces may still develop to some extent.

Two related but independent hypotheses may explain the emergence of the face module without face experience. An intuitive answer is that the rudimentary functions are hardwired in the brain by genetic predisposition (McKone, Crookes, Jeffery, & Dilks, 2012; Wilmer et al., 2010). Alternatively, we argue that the face module may emerge from experience with nonface objects and related general-purpose processes, because representations for faces may be constructed from the abundant features derived from nonface objects. Unfortunately, studies on humans and monkeys are unable to thoroughly decouple the effects of nature and nurture to test these two hypotheses.

Recent advances in deep convolutional neural networks (DCNNs) provide an ideal test platform to examine the role of visual experience alone on face modules, without genetic predisposition. Previous studies have shown that DCNNs are similar to the human visual cortex both structurally and functionally (Kriegeskorte, 2015), but free of any predisposition toward functional modules. Therefore,


with DCNNs we can manipulate experience without considering interactions with genetic predisposition. In this study, we asked whether DCNNs can achieve face-specific recognition ability when visual experience of faces is selectively deprived.

To do this, we trained a representative DCNN, AlexNet (Krizhevsky, 2014; Krizhevsky, Sutskever, & Hinton, 2012), to categorize nonface objects, with face images carefully removed from the training dataset. Once this face-deprived DCNN (d-AlexNet) was trained, we compared its behavioral performance to that of a normal AlexNet of the same architecture, but with faces present during training, in both face categorization (i.e., differentiating faces from nonface objects) and face discrimination (i.e., discriminating faces of different individuals) tasks. We predicted that the d-AlexNet, though without predisposition toward or experience of faces, may still develop face selectivity through its visual experience of nonface objects.


2 Materials and methods

2.1 Stimuli

Deprivation dataset The deprivation dataset was constructed to train the d-AlexNet. It was based on the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) 2012 dataset (Deng et al., 2009), which contains 1,281,167 images for training and 50,000 images for validation, in 1,000 categories. These images were first subjected to automated screening with an in-house face-detection toolbox based on VGG-Face (Parkhi, Vedaldi, & Zisserman, 2015), and then further screened by two human raters, who separately judged whether a given image contained faces of humans or non-human primates, regardless of the orientation and intactness of the face, or anthropomorphic artwork, cartoons, or artifacts. We removed images judged by either rater as containing any of the above content. Finally, we removed categories with fewer than 640 remaining images (approximately half of the original number of images in a category). The resultant dataset consists of 736 categories, with 662,619 images for training and 33,897 for testing performance.
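For illustration, the automated screening step might be sketched as below. This is a minimal sketch, not the authors' pipeline: the detector (facenet-pytorch's MTCNN), the directory layout, and the confidence threshold are stand-ins for the in-house VGG-Face-based toolbox, and flagged images would still go to the two human raters for the final judgment.

```python
import os
from PIL import Image
from facenet_pytorch import MTCNN  # illustrative stand-in for the in-house VGG-Face-based detector

detector = MTCNN(keep_all=True)

def contains_face(path, threshold=0.9):
    """Flag an image if any detected face exceeds the confidence threshold."""
    boxes, probs = detector.detect(Image.open(path).convert('RGB'))
    return boxes is not None and any(p > threshold for p in probs)

root = 'ilsvrc2012/train'  # hypothetical path
kept = [f for f in os.listdir(root) if not contains_face(os.path.join(root, f))]
```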

Classification dataset To train a classifier that can classify faces, we constructed a classification dataset consisting of 204 categories of non-face objects and one face category, each with 80 exemplars. For the non-face categories, we manually screened Caltech-256 (Griffin, Holub, & Perona, 2007) to remove images containing human, primate, or cartoon faces, and then removed categories with fewer than 80 remaining images. In each of the 204 remaining non-face categories, we randomly chose 70 images for training and another 10 for calculating classification accuracy. The face category was constructed by randomly selecting 1,000 face images from the Faces in the Wild (FITW) dataset (Berg, Berg, Edwards, & Forsyth, 2005); among them, 70 were used as training data and another 10 for calculating classification accuracy. In addition, to characterize the DCNNs' ability to differentiate faces from object categories, we compiled a second dataset consisting of all images in the face category except those used in training.

Discrimination dataset To train a classifier that can discriminate faces at the individual level, we constructed a discrimination dataset consisting of face images of 133 individuals, 300 images each, selected from the CASIA-WebFace database (Yi, Lei, Liao, & Li, 2014). For each individual, 250 images were randomly chosen for training and another 50 for calculating discrimination accuracy.

Representation dataset To examine the representational similarity of faces and non-face images between the d-AlexNet and the normal AlexNet, we constructed a representation dataset with two categories: faces, and bowling pins as an 'unseen' non-face object category that was not presented to the DCNNs during training. Each category consisted of 80 images. The face images were a random subset of FITW, and the images of bowling pins were randomly chosen from the corresponding category in Caltech-256.

Movie clips for DCNN-brain correspondence analysis We examined the correspondence between the face-selective responses of the DCNNs and brain activity using a set of 18 clips of 8-min natural color videos from the Internet that are diverse yet representative of real-life visual experience (Wen et al., 2017).

2.2 The deep convolutional neural network

Our model of selective deprivation, the d-AlexNet, was built with the architecture of the well-known DCNN 'AlexNet' (Krizhevsky et al., 2012; see Figure 1a for an illustration). AlexNet is a feed-forward hierarchical convolutional neural network consisting of five convolutional layers (denoted Conv1–Conv5) and three fully connected layers (denoted FC1–FC3). Each convolutional layer consists of a convolutional sublayer followed by a ReLU sublayer, and Conv1, 2, and 5 are further followed by a pooling sublayer. Each convolutional sublayer consists of a set of distinct channels. Each channel convolves the input with a distinct linear filter (kernel), which extracts filtered


outputs from all locations within the input with a particular stride. FC3 is followed by a sublayer that uses a softmax function to output a vector representing the probability that the visual input contains each object category (Krizhevsky et al., 2012).

The d-AlexNet used the architecture of AlexNet but changed the number of units in FC3 to 736, as well as the subsequent softmax function, to match the number of categories in the deprivation dataset. As with the pretrained AlexNet in PyTorch 1.2.0 (https://pytorch.org/; Paszke et al., 2017), the d-AlexNet was initialized with values drawn from a uniform distribution, and was then trained on the deprivation dataset following the approach specified in Krizhevsky (2014). We used the pretrained AlexNet from PyTorch 1.2.0 as the normal DCNN, referred to as the AlexNet in this paper for brevity.

The present study refers to channels in the convolutional sublayers by the layer they belong to and a channel index, following the convention of PyTorch 1.2.0. For instance, Conv5-Ch256 refers to the 256th convolutional channel of Conv5.
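In PyTorch terms, constructing the two networks might look like the following minimal sketch; torchvision's `alexnet` constructor stands in for the authors' training code, and the variable names are ours.

```python
import torchvision.models as models

# Untrained AlexNet whose final layer (FC3) is shrunk to the 736 categories
# that survive face removal; weights start from the default initialization
# and are then trained on the deprivation dataset.
d_alexnet = models.alexnet(pretrained=False, num_classes=736)

# The comparison network: AlexNet pretrained on the full ILSVRC 2012
# dataset (1,000 categories, faces included), as shipped with PyTorch.
alexnet = models.alexnet(pretrained=True)
```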

2.3 Transfer learning for classification and discrimination

To examine to what extent our manipulation of visual experience affected the categorical processing of faces, we replaced the fully connected layers of each DCNN with a two-layer face-classification classifier. The first layer was a fully connected layer with 43,264 input units and 4,096 output units with a sigmoid activation function, and the second was a fully connected layer with 4,096 input units and 205 output units, each of which corresponded to one category of the classification dataset. This classifier therefore classified each image into one category of the classification dataset. The face-classification classifier was trained for each DCNN with the training images in the classification dataset for 90 epochs.

To examine to what extent our manipulation of visual experience affected face discrimination, we similarly replaced the fully connected layers of each DCNN with a discrimination classifier. The discrimination classifier differed from the classification classifier only in its second layer, which instead had 133 output units, each corresponding to one individual in the discrimination dataset. The face-discrimination classifier was trained for each DCNN with the training images in the discrimination dataset for 90 epochs.
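The two heads might be sketched as follows, assuming the flattened Conv5 output (43,264 = 256 × 13 × 13 units) is fed in and the convolutional layers stay frozen during transfer learning; the class and variable names are ours.

```python
import torch.nn as nn

class TransferHead(nn.Module):
    """Two-layer classifier on the flattened Conv5 output (43,264 units)."""
    def __init__(self, n_out):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(43264, 4096),
            nn.Sigmoid(),
            nn.Linear(4096, n_out),
        )

    def forward(self, x):
        return self.fc(x.flatten(1))  # flatten the Conv5 maps per image

categorization_head = TransferHead(n_out=205)   # 204 object categories + faces
discrimination_head = TransferHead(n_out=133)   # one output unit per identity
```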


2.4 The face-selective channels in DCNNs

To identify the channels selectively responsive to faces, we submitted the images in the classification dataset to each DCNN, recorded the average activation in each channel of Conv5 after ReLU in response to each image, and then averaged the channel-wise activation within each category. We selected channels in which the face category evoked the highest activation, and used the Mann-Whitney U test to examine the activation difference between faces and the object category with the second-highest activation in these channels (p < .05, Bonferroni corrected). The selectivity of each face channel thus identified was indexed by the selective ratio, calculated by dividing the activation to faces by the second-highest activation. In addition, we measured the lifetime sparseness of each face-selective channel as an index of its selectivity for faces among all non-face objects. We first normalized the mean activations of a face channel in Conv5 to all categories to the range 0–1, and then calculated lifetime sparseness with the formula:

$$ S = \left( \sum_{i=1}^{n} r_i / n \right)^{2} \bigg/ \sum_{i=1}^{n} \left( r_i^{2} / n \right) $$

where $r_i$ is the normalized activation to the $i$th object category and $n$ is the number of categories. The smaller this value, the higher the selectivity.
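In code, the index might be computed as in this minimal sketch; `activations` is a hypothetical vector holding one channel's mean activation per category, and the 0–1 scaling mirrors the normalization described above.

```python
import numpy as np

def lifetime_sparseness(activations):
    """S = (sum_i r_i / n)^2 / sum_i (r_i^2 / n); smaller S means sharper tuning."""
    r = np.asarray(activations, dtype=float)
    r = (r - r.min()) / (r.max() - r.min())  # normalize to the range 0-1
    n = r.size
    return (r.sum() / n) ** 2 / np.sum(r ** 2 / n)
```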

2.5 DCNN-brain correspondence

We submitted the movie clips to the DCNNs. Following the approach of Wen et al. (2017), we extracted and log-transformed the channel-wise output (the average activation after ReLU) of each face-selective channel using DNNBrain, an in-house toolbox (Chen et al., 2020), and then convolved it with a canonical hemodynamic response function (HRF) with a positive peak at 4 s. The HRF-convolved channel-wise activity was then down-sampled to match the sampling rate of functional magnetic resonance imaging (fMRI), and the resultant time series was standardized before further analysis.
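This preprocessing chain might be sketched as follows; the gamma-shaped HRF, the 0.1-s frame step, and the 2-s TR are our assumptions standing in for the canonical HRF and the acquisition parameters of the fMRI dataset.

```python
import numpy as np
from scipy.stats import gamma
from scipy.signal import resample

def hrf(dt=0.1, duration=32.0, peak=4.0, shape=6.0):
    """A gamma-shaped HRF whose mode sits at `peak` seconds."""
    t = np.arange(0.0, duration, dt)
    h = gamma.pdf(t, a=shape, scale=peak / (shape - 1.0))
    return h / h.sum()

def channel_to_bold(act, dt=0.1, tr=2.0):
    """Log-transform a channel's frame-wise activation, convolve it with
    the HRF, down-sample to the fMRI sampling rate, and standardize."""
    x = np.convolve(np.log(act + 1e-8), hrf(dt))[:len(act)]
    x = resample(x, int(len(act) * dt / tr))  # one sample per TR
    return (x - x.mean()) / x.std()
```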

Neural activation in the brain was derived from the preprocessed data in Wen et al. (2017). The fMRI data were recorded while human participants viewed each movie clip twice. We averaged the standardized time series across repetitions and across subjects for each clip. Then, for each DCNN, we conducted a multiple regression for each clip, with the activation time series of each brain vertex as the dependent variable and those of the face-selective channels in this network as independent variables. For the d-AlexNet, all face-selective channels were included. For the AlexNet, we included the same number of face-selective channels with the highest face selectivity, to match the complexity of the


regression model. We used the R² of each vertex as the index of the overall goodness of fit of the regression at that vertex. The R² values were then averaged across clips. The larger the R² value, the higher the correspondence between the DCNN and the brain in response to the movie clips.

To determine whether the cortical regions with large R² values were traditional face-selective regions, we delineated the bilateral fusiform face area (FFA) and occipital face area (OFA) with the maximum-probability atlas of face-selective regions (Zhen et al., 2015). The 200 vertices with the highest probability of the left FFA and the 200 of the right FFA were included in the FFA ROI, and the OFA ROI was delineated in the same way. The correspondence with brain activation in each ROI and the impact of visual experience were examined by submitting the vertex-wise R² to a two-way ANOVA with visual experience (d-AlexNet vs. AlexNet) as a within-subject factor and regional correspondence (OFA vs. FFA) as a between-subject factor.
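The vertex-wise goodness of fit might be computed as in this sketch; the helper name and array shapes are ours, and scikit-learn's `LinearRegression` stands in for whichever regression routine was actually used.

```python
from sklearn.linear_model import LinearRegression

def vertex_r2(channel_ts, vertex_ts):
    """R^2 of a multiple regression predicting one vertex's fMRI time series
    from the face-selective channels' HRF-convolved time series.
    channel_ts: (n_timepoints, n_channels); vertex_ts: (n_timepoints,)."""
    model = LinearRegression().fit(channel_ts, vertex_ts)
    return model.score(channel_ts, vertex_ts)  # coefficient of determination
```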

2.6 Face inversion effect in DCNNs

The average activation amplitude of the top two face-selective channels of each DCNN in response to the upright and inverted versions of 20 faces from the Reconstructing Faces dataset (VanRullen & Reddy, 2019) was measured. The inverted faces were generated by vertically flipping the upright ones. The face inversion effect in the d-AlexNet was measured with a paired-sample t-test (two-tailed), and the impact of experience on the face inversion effect was examined by a two-way ANOVA with visual experience (d-AlexNet vs. AlexNet) and inversion (upright vs. inverted) as within-subject factors.
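A minimal sketch of this comparison is given below; note that torchvision's `net.features` output includes the final pooling sublayer, so it only approximates the Conv5-after-ReLU activations used in the text, and the function name is ours.

```python
import torch
from scipy.stats import ttest_rel

def inversion_effect(net, faces, face_channels):
    """Compare the mean activation of the face-selective channels to upright
    vs. vertically flipped faces. faces: tensor of shape (N, 3, H, W)."""
    with torch.no_grad():
        up = net.features(faces)[:, face_channels].mean(dim=(1, 2, 3))
        inv = net.features(torch.flip(faces, dims=[2]))[:, face_channels].mean(dim=(1, 2, 3))
    return ttest_rel(up.numpy(), inv.numpy())  # paired-sample t-test
```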

2.7 Representational similarity analysis

To examine whether faces in the d-AlexNet were processed in an object-like fashion, we compared the within-category representational similarity of faces to that of bowling pins, an 'unseen' non-face object category never shown to either DCNN. Specifically, for each image in the representation dataset, we arranged the average activations of each channel of Conv5 after ReLU into a vector, and then for each pair of images we calculated and Fisher-z transformed the correlation between their vectors, which served as an index of pairwise representational similarity. Within-category similarity between pairs of face images and between pairs of object images was calculated separately. A 2 × 2 ANOVA was conducted with visual experience (d-AlexNet vs. AlexNet) and category (face vs. object) as independent factors. In addition, the cross-category similarity between faces


and bowling pins was also calculated for each DCNN, and a paired-sample t-test (two-tailed) comparing the two DCNNs was conducted.
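The pairwise-similarity computation might be sketched as follows; the helper name and input shape are ours.

```python
import numpy as np

def pairwise_similarity(vectors):
    """Fisher-z transformed Pearson correlations between all image pairs.
    vectors: (n_images, n_channels) array of Conv5 channel-mean activations."""
    r = np.corrcoef(vectors)
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z-transform
    iu = np.triu_indices_from(z, k=1)                # unique pairs only
    return z[iu]
```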

2.8 Sparse coding and empirical receptive field

To quantify the degree of sparseness of the face-selective channels in representing faces, we submitted the same set of 20 natural images containing faces from FITW to each DCNN and measured the number of activated units (i.e., units showing above-zero activation) in the face-selective channels. The more non-zero units in the face-selective channels, the less sparse the representation of faces. The coding sparseness of the two DCNNs was compared with a paired-sample t-test.

We also calculated the size of the empirical receptive field of the face-selective channels. Specifically, we obtained activation maps for 1,000 images randomly chosen from FITW. Using the in-house toolbox DNNBrain (Chen et al., 2020), we up-sampled each activation map to the size of the input. For each image, we averaged the up-sampled activation within the theoretical receptive field of each unit (the part of the image covered by the convolution of this unit and the preceding computation, determined by the network architecture), and selected the unit with the highest average activation. We then cropped the up-sampled activation map by the theoretical receptive field of this unit, to locate the image part that activated this channel most across all units. Finally, we averaged the corresponding cropped activation maps across all face images; the resultant map denotes the empirical receptive field of this channel, delineating the part of the theoretical receptive field that causes this channel to respond strongly when viewing its preferred stimuli.
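The procedure can be illustrated with the brute-force sketch below, under our assumptions of a 224-pixel input and a known theoretical receptive-field size; DNNBrain implements the actual pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def empirical_rf(act_maps, rf_size, input_size=224):
    """Average, across images, the up-sampled activation crop centered on each
    image's most active unit. act_maps: (n_images, h, w) maps of one channel;
    rf_size: side of the theoretical receptive field in input pixels."""
    crops = []
    for m in act_maps:
        up = zoom(m, input_size / m.shape[0])  # up-sample to the input size
        best, best_yx = -np.inf, (0, 0)
        # exhaustive search for the window with the highest mean activation
        for y in range(up.shape[0] - rf_size + 1):
            for x in range(up.shape[1] - rf_size + 1):
                s = up[y:y + rf_size, x:x + rf_size].mean()
                if s > best:
                    best, best_yx = s, (y, x)
        y, x = best_yx
        crops.append(up[y:y + rf_size, x:x + rf_size])
    return np.mean(crops, axis=0)  # the empirical receptive field map
```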


3 Results

The d-AlexNet was trained with a dataset of 662,619 non-face images in 736 non-face categories, generated by removing images containing faces from the ILSVRC 2012 dataset (Figure 1a). The d-AlexNet was initialized and trained in the same way as the AlexNet, and the resultant top-1 accuracy (57.29%) and top-5 accuracy (80.11%) were comparable with those of the pre-trained AlexNet.

We first examined the performance of the d-AlexNet in two representative tasks of face processing: face categorization (i.e., differentiating faces from non-face objects) and face


discrimination (i.e., identifying different individuals). The output of Conv5 after ReLU of the d-AlexNet was used to classify objects in the classification dataset. The average categorization accuracy of the d-AlexNet (67.40%) was well above the chance level (0.49%) and comparable to that of the AlexNet (68.60%; t(204) = 1.26, p = 0.209, Cohen's d = 0.007; Figure 1b). Critically, the d-AlexNet, although it had no experience of faces, succeeded in the face categorization task, with an accuracy of 86.50% in categorizing faces versus non-face objects, though this accuracy was numerically smaller than the AlexNet's (93.90%; Figure 1c).

A similar pattern was observed in the face discrimination task. In this task, the output of Conv5 after ReLU of each DCNN was used to classify 33,250 face images into the 133 identities in the discrimination dataset. As expected, the AlexNet was capable of face discrimination (65.9%), well above the chance level (0.75%), consistent with previous studies (AbdAlmageed et al., 2016; Grundstrom, Chen, Ljungqvist, & Astrom, 2016). Critically, the d-AlexNet also showed the capability of discriminating faces, with an accuracy of 69.30%, which was even significantly higher than that of the AlexNet, t(132) = 3.16, p = .002, Cohen's d = 0.20 (Figure 1d). Taken together, visual experience of faces seemed unnecessary for developing basic functions of face processing.


Figure 1. (a) An illustration of the screening that removed images containing faces for the d-AlexNet. The 'faces' shown in the figure were AI-generated for illustration purposes only, and therefore bear no relation to any real person. In the experiment, the face images were from ImageNet, with real persons' faces. (b) The classification performance across categories of the two DCNNs was comparable. (c) Both DCNNs achieved high accuracy in categorizing faces versus other images. (d) Both DCNNs' performance in discriminating faces was above the chance level, and the d-AlexNet's accuracy was significantly higher than that of the AlexNet. Error bars denote standard errors. Asterisks denote statistical significance (α = .05); n.s. denotes no significance.


Was a face module formed in the d-AlexNet to support these functions? To answer this question, we searched all channels in Conv5 of the d-AlexNet, where face-selective channels have previously been identified in the AlexNet (Baek, Song, Jang, Kim, & Paik, 2019). To do this, we calculated the activation of each channel in Conv5 after ReLU in response to each category of the classification dataset, and then identified channels that showed significantly higher responses to faces than to non-face images with the Mann-Whitney U test (ps < .05, Bonferroni corrected). Two face-selective channels (Ch29 and Ch50) met this criterion in the d-AlexNet (for an example channel, see Figure 2a, right), whereas four face-selective channels (Ch195, Ch125, Ch60, and Ch187) were identified in the AlexNet (for an example channel, see Figure 2a, left). The face-selective channels in the two DCNNs differed in selectivity. The average selective ratio, the activation magnitude to faces divided by that to the most activated non-face object category, was 1.29 (range: 1.22–1.36) in the d-AlexNet, much lower than that in the AlexNet (average ratio: 3.63, range: 1.43–6.66). The lifetime sparseness, which measures the breadth of tuning of a channel in response to a set of categories, showed a similar result. The average lifetime sparseness index of the face channels in the AlexNet (mean = 0.25, range: 0.11–0.51) was smaller than that in the d-AlexNet (mean = 0.71, range: 0.70–0.71), indicating higher face selectivity in the AlexNet than in the d-AlexNet. Taken together, these findings suggest that face-selective channels emerged in the d-AlexNet, though their face selectivity was weaker.

How did the face-selective channels correspond to face-selective cortical regions in humans, such as the FFA and OFA? To answer this question, we calculated the coefficient of determination (R²) of the multiple regression with the output of the face-selective channels as regressors and the fMRI signals from the human visual cortex in response to natural-vision movies as the regressand. As shown in Figure 2b (right), the face-selective channels identified in the d-AlexNet corresponded to the bilateral FFA, OFA, and posterior superior temporal sulcus face area (pSTS-FA). Similar correspondence was also found for the top two face-selective channels in the AlexNet (Figure 2b, left). Direct visual inspection revealed that the deprivation weakened the correspondence between the face-selective channels and the face-selective regions in the human brain. This observation was confirmed by the main effect of visual experience (F(1,798) = 161.97, p < .001, partial η² = 0.17) in a two-way ANOVA of visual experience (d-AlexNet vs. AlexNet) by regional correspondence (OFA vs. FFA). In addition, the main effect of regional correspondence showed that the response profile of the face-selective channels in the DCNNs fitted better with the activation of the FFA than


that of the OFA (F(1,798) = 98.69, p = .001, partial η² = 0.11), suggesting that the face-selective channels in DCNNs may in general prefer to process faces as wholes rather than as face parts. Critically, the two-way interaction was significant (F(1,798) = 84.9, p < .001, partial η² = 0.10), indicating that the experience affected the correspondence to the FFA and OFA disproportionately. A simple-effect analysis revealed that the correspondence to the FFA (MD = 0.023, p < .001) was increased by face-specific experience to a significantly larger extent than that to the OFA (MD = 0.004, p = .013; Figure 2c). Since the FFA is more involved in the holistic processing of faces and the OFA is more dedicated to parts-based analysis, the disproportionate decrease in correspondence between the face-selective channels in the d-AlexNet and the FFA implies that the role of experience of faces is to facilitate the processing of faces as wholes.

To test this conjecture, we examined how the d-AlexNet responded to inverted faces, a behavioral signature of face-specific processing. As expected, there was a face inversion effect in the AlexNet's face-selective channels, with the magnitude of activation to upright faces significantly larger than that to inverted faces (t(19) = 6.45, p < .001, Cohen's d = 1.44) (Figure 2d). However, no inversion effect was observed in the d-AlexNet, as the magnitude of activation to upright faces was not significantly larger than that to inverted faces (t(19) = 0.86, p = .40). The lack of an inversion effect in the d-AlexNet was further supported by a two-way interaction of visual experience by face orientation, F(1, 19) = 7.79, p = .012, partial η² = 0.29. That is, unlike the AlexNet, the d-AlexNet processed upright faces in the same fashion as inverted faces.


Figure 2. (a) The category-wise activation profiles of example face-selective channels of the AlexNet (left) and the d-AlexNet (right). The 'faces' shown here were AI-generated for illustration purposes only. (b) The R² maps of the regression with the activation of the face-selective channels of the d-AlexNet (right) or the AlexNet (left) as the independent variables. The higher the R² of the multiple regression, the better the correspondence between the face channels in the DCNNs and the face-selective regions in the human brain. The crimson lines delineate the ROIs of the OFA and the FFA. (c) The face channels of both DCNNs corresponded better with the FFA than with the OFA, and the difference between the AlexNet and the d-AlexNet was larger in the FFA. (d) Face inversion effect. The average activation amplitude of the top two face-selective channels differed in response to upright versus inverted faces in the AlexNet but not in the d-AlexNet. Error bars denote standard errors. Asterisks denote statistical significance (α = .05); n.s. denotes no significance.


Previous studies on humans suggest that inverted faces are processed in an object-like fashion; that is, their processing relies more on parts-based analysis than on holistic processing. Therefore, we speculated that in the d-AlexNet faces were also represented more like non-face objects. To test this speculation, we first compared the representational similarity among responses in Conv5 to faces and to bowling pins, a novel object category that was not shown to either DCNN. As expected, the two-way interaction of experience (AlexNet vs. d-AlexNet) by category (faces vs. bowling pins) was significant (F(1, 6,318) = 4,110.88, p < .001, partial η² = 0.39), and the simple-effect analysis suggested that the representations of faces were more similar to one another in the AlexNet than in the d-AlexNet (MD = 0.16, p < .001), whereas the within-category representational similarity for bowling pins showed the same but numerically smaller between-DCNN difference (MD = 0.005, p = .002) (Figure 3a).

A more critical test was to examine how face-specific experience made faces processed differently from objects. Here we calculated the between-category similarity between faces and bowling pins. We found that this between-category similarity was significantly higher in the d-AlexNet than in the AlexNet (t(3,159) = 42.42, MD = 0.07, p < .001, Cohen's d = 0.76) (Figure 3b), suggesting that faces in the d-AlexNet were indeed represented more like objects. In short, although the d-AlexNet was able to perform face tasks similarly to the network with face-specific experience, it represented faces in an object-like fashion.

Finally, we asked how faceness was achieved in DCNNs with face-specific experience. Neurophysiological studies on monkeys demonstrate experience-associated sharpening of neural responses, with fewer neurons activated after learning. Here we performed a similar analysis by measuring the number of non-zero units (i.e., units with above-zero activation) of the face-selective channels activated by natural images containing faces. As shown in the activation map (Figure 3c), a smaller number of units was activated by faces in the AlexNet than in the d-AlexNet (t(19) = 3.317, MD = 15.78, Cohen's d = 0.74) (Figure 3d), suggesting that experience of faces made the representation of faces sparser, and thus more efficient. Another effect of visual experience observed in neurophysiological studies is that experience reduces the size of neurons' receptive fields. Here we also mapped the empirical receptive field of the face-selective channels. Similarly, we found that the empirical receptive field of the AlexNet was smaller than that of the d-AlexNet. That is,


within the theoretical receptive field, the empirical receptive field of the face-selective channels in the AlexNet was tuned by face-specific experience to focus on a smaller region (Figure 3e).


Figure 3. (a) The within-category similarity in the face category and an unseen non-face category (bowling pins) in the two DCNNs. (b) The between-category similarity between faces and bowling pins. (c) The activation maps of a typical face-selective channel of each DCNN in response to natural images containing faces. Each pixel denotes the activation of one unit. The images shown here are historical portrait paintings downloaded from the Internet for illustration purposes only, and differ from the images used in this study. (d) The extent of activation of the face-selective channels of each DCNN in response to natural images containing faces. (e) The empirical receptive fields of the face-selective channels of each DCNN. Error bars denote standard errors. Asterisks denote statistical significance (α = .05).


4 Discussion

This study presented a DCNN model of selective visual deprivation of faces. We found that without genetic predisposition or face-specific visual experience, DCNNs were still capable of face perception. In addition, face-selective channels were present in the d-AlexNet, and they corresponded to human face-selective regions. That is, visual experience of faces was not necessary for an intelligent system to develop a face-selective module. On the other hand, besides the slightly compromised selectivity of the module, the deprivation led the d-AlexNet to process faces in a more parts-based fashion, similar to the way it processes objects. Indeed, the face inversion effect was absent in the d-AlexNet, and the representation of faces was more similar to that of objects as compared to the AlexNet. Finally, the function of face-specific experience that led the AlexNet to process faces as wholes might be achieved by fine-tuning the sparse coding and the receptive field size of the face-selective channels. In sum, our study addressed a long-standing debate on nature versus nurture in the development of the face-specific module, and illuminated the role of visual experience in shaping the module.

The observation that face-selective processing and a face-selective module still emerged in the d-AlexNet without domain-specific visual experience may seem surprising; yet this finding is consistent with previous studies on non-human primates and newborn human infants (Bushneil, Sai, & Mullin, 1989; Goren, Sarty, & Wu, 1975; Morton & Johnson, 1991; Sugita, 2008; Valenza, Simion, Cassia, & Umiltà, 1996), in which face-specific experience was found unnecessary for face detection and recognition. However, experience-independent face processing is largely attributed to either innate face-specific mechanisms (McKone et al., 2012; Morton & Johnson, 1991) or domain-general processing with predisposed biases (Cassia, Turati, & Simion, 2004; Simion & Di Giorgio, 2015; Simion, Macchi Cassia, Turati, & Valenza, 2001). Our study argues against this conjecture, because unlike any biological system, DCNNs have no predefined genetic inheritance or processing biases.

Therefore, the face-specific processing observed in DCNNs had to derive from domain-general factors.

We speculate that the face module in the d-AlexNet may result from the tremendous number of features represented in the multiple layers of the network, from which face-like features were selected to construct the face-specific module. In fact, previous studies on DCNNs have shown that the lower layers of DCNNs are sensitive to myriad visual features, similar to primates' primary visual cortex (Krizhevsky et al., 2012), while the higher layers are tuned to complex features resembling those


represented in the ventral visual pathway (Güçlü & van Gerven, 2015; Khaligh-Razavi & Kriegeskorte, 2014; Pospisil, Pasupathy, & Bair, 2018; Yamins et al., 2014). With such a rich repertoire of features, a representational space for faces, or for any natural object, may be constructed by selecting face-like features and features that are potentially useful in a variety of face tasks.

Supporting evidence for this conjecture came from the observation that the d-AlexNet processed faces in an object-like fashion. For example, the face inversion effect, a behavioral signature of face-specific processing in humans (Kanwisher, Tong, & Nakayama, 1998; Rossion & Gauthier, 2002; Yin, 1969), was absent in the d-AlexNet. That is, similar to inverted faces, upright faces may also be processed like objects in the d-AlexNet. A more direct illustration of the object-like representation of faces came from the analysis of the representational similarity between faces and objects. Compared to the AlexNet, faces in the representational space of the d-AlexNet were less congregated with one another; instead, they were more intermingled with non-face object categories. The finding that the face representation was no longer qualitatively different from object representations may help explain the performance of the d-AlexNet. Because faces were less segregated from objects in the representational space, the d-AlexNet's face categorization accuracy was worse than the AlexNet's. In contrast, within the face category, individual faces were also less congregated in the representational space; therefore, discriminating individual faces became easier instead, as suggested by the slightly higher face discrimination accuracy of the d-AlexNet. In short, when the representational space of the d-AlexNet was formed exclusively from features of non-face stimuli, faces were represented as no longer qualitatively different from non-face objects, which inevitably led to 'object-like' face processing.

Face-specific processing is likely achieved through prior exposure to faces. At first glance, the effect of face-specific experience seemed quantitative: in the AlexNet, both the selectivity to faces and the number of face-selective channels were increased, and the correspondence between the face-selective channels and the face-selective regions in the human brain was tighter. However, careful scrutiny of the differences between the two DCNNs revealed that the changes brought by experience may be qualitative. For example, the deprivation of visual experience disproportionately weakened the DCNN-brain correspondence in the FFA as compared to the OFA, and the FFA is engaged more in configural processing and the OFA in parts-based analysis (Liu, Harris, & Kanwisher, 2010; Nichols, Betts, & Wilson, 2010; Zhao et al., 2014). Therefore, 'face-like' face processing may come from the fact that face-specific experience made the representation of faces more


congregated within the face category and more separable from the representations of non-face objects (see also Gomez, Barnett, & Grill-Spector, 2019). In this way, a relatively encapsulated representation may help develop a unique way of processing faces, qualitatively different from that of non-face objects.

The computational transparency of DCNNs may shed light on the development of the domain specificity of the face module. First, we found that face-specific experience increased the sparseness of face representations, as fewer units of the face channels were activated by faces in the AlexNet. Experience-dependent sparse coding has been widely observed in the visual cortex, such as in V4, MT, and IT (for reviews, see Desimone, 1996; Grill-Spector, Henson, & Martin, 2006; Wiggs & Martin, 1998). The experience-induced increase in sparseness is thought to reflect a preference-narrowing process that tunes neurons to a smaller range of stimuli (Kohn & Movshon, 2004); therefore, with sparse coding, faces are less likely to be intermingled with non-face objects, which may lead to more congregated representations in the representational space of the AlexNet as compared to the d-AlexNet. Second, we found that the empirical receptive field of the face channels in the AlexNet was smaller than that in the d-AlexNet, suggesting that visual experience of faces decreased the receptive field size of the face channels. This finding fits well with neurophysiological studies showing that the size of the receptive fields of visual neurons is reduced after eye-opening (Braastad & Heggelund, 1985; Cantrell, Cang, Troy, & Liu, 2010; Koehler, Akimov, & Renteria, 2011; Tavazoie & Reid, 2000). Importantly, along with the refined receptive fields, the selectivity of neurons increases (Spilmann, 2014), possibly because neurons can avoid distracting information by focusing on a more restricted part of the stimuli, which may further allow finer representation of the selected regions. This is especially important for processing faces, because faces are highly homogeneous, and some information is identical across faces, such as the composition of parts (eyes, nose, and mouth) and their configural arrangement. Therefore, the reduced receptive field of the face channels may facilitate selective analyses of discriminative face features while avoiding irrelevant information. Further, the sharpening of the receptive field and the fine-tuned selectivity may result in superior discrimination ability for faces, allowing faces to be processed at the subordinate level (i.e., identification), whereas most other objects are processed at the basic level (i.e., categorization).

It has long been assumed that domain-specific visual experience and inheritance are prerequisites for the development of the face module. In our study, with DCNNs as a model, we


completely decoupled genetic predisposition and face-specific visual experience, and found that the representation of faces can be constructed from features of non-face objects to realize basic functions of face recognition. Therefore, in many situations, the difference between faces and objects is 'quantitative' rather than 'qualitative', as they are represented in a continuum of the representational space. In addition, we also found that face-specific experience likely fine-tuned the face representation, and thus transformed 'object-like' face processing into 'face-specific' processing. However, we should be cautious that our finding may not be applicable to the development of the face module in humans, as in the biological brain experience-induced changes are partly attributed to inhibition from lateral connections (Grill-Spector et al., 2006; Norman & O'Reilly, 2003), whereas there are no lateral or feedback connections in DCNNs. Nevertheless, despite these structural differences, recent studies have shown similar representations of faces between DCNNs and humans (Song, Qu, Xu, & Liu, 2020), suggesting that a common mechanism may be shared by artificial and biological intelligent systems. Future studies are needed to examine the applicability of our finding to humans. On the other hand, our study illustrates the advantages of using DCNNs as a model to understand the human mind, given their computational transparency and their dissociation of the factors of nature and nurture. Thus, our study invites future studies with DCNNs to understand the development of domain specificity in particular, and of a broad range of cognitive modules in general.


5 Conflict of Interest

The authors declare no competing interests.

6 Author Contributions

J. L. conceived and designed the study. Y. Z. analyzed the data with input from all authors. S. X. wrote the manuscript with input from J. L., Y. Z., and Z. Z.

7 Funding

This study was funded by the National Natural Science Foundation of China (31861143039, 31771251, and 31600925), the Fundamental Research Funds for the Central Universities, and the National Basic Research Program of China (2018YFC0810602).


8 Data Availability Statement

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

9 References

AbdAlmageed, W., Wu, Y., Rawls, S., Harel, S., Hassner, T., Masi, I., et al. (2016). Face recognition using deep multi-pose representations. In 2016 IEEE Winter Conference on Applications of Computer Vision.

Arcaro, M. J., Schade, P. F., Vincent, J. L., Ponce, C. R., & Livingstone, M. S. (2017). Seeing faces is necessary for face-domain formation. Nature Neuroscience, 20(10), 1404-1412.

Baek, S., Song, M., Jang, J., Kim, G., & Paik, S.-B. (2019). Spontaneous generation of face recognition in untrained deep neural networks. bioRxiv, 857466.
Berg, T. L., Berg, A. C., Edwards, J., & Forsyth, D. A. (2005). Who's in the picture. Paper presented at Advances in Neural Information Processing Systems.
Braastad, B. O., & Heggelund, P. (1985). Development of spatial receptive-field organization and orientation selectivity in kitten striate cortex. Journal of Neurophysiology, 53(5), 1158-1178.
Bushneil, I., Sai, F., & Mullin, J. (1989). Neonatal recognition of the mother's face. British Journal of Developmental Psychology, 7(1), 3-15.
Cantrell, D. R., Cang, J., Troy, J. B., & Liu, X. (2010). Non-centered spike-triggered covariance analysis reveals neurotrophin-3 as a developmental regulator of receptive field properties of ON-OFF retinal ganglion cells. PLoS Computational Biology, 6(10).
Cassia, V. M., Turati, C., & Simion, F. (2004). Can a nonspecific bias toward top-heavy patterns explain newborns' face preference? Psychological Science, 15(6), 379-383.

Chen, X., Zhou, M., Gong, Z., Xu, W., Liu, X., Huang, T., et al. (2020). DNNBrain: a unifying 509 toolbox for mapping deep neural networks and brains. bioRxiv, 2020.2007.2005.188847. 510

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. Paper presented at the 2009 IEEE Conference on Computer Vision and Pattern Recognition.

Desimone, R. (1996). Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences of the United States of America, 93(24), 13494-13499.

Freiwald, W., Duchaine, B., & Yovel, G. (2016). Face processing systems: From neurons to real-world social perception. In S. E. Hyman (Ed.), Annual Review of Neuroscience (Vol. 39, pp. 325-346).

Gomez, J., Barnett, M., & Grill-Spector, K. (2019). Extensive childhood experience with Pokemon suggests eccentricity drives organization of visual cortex. Nature Human Behaviour, 3(6), 611-624.

Goren, C. C., Sarty, M., & Wu, P. Y. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56(4), 544-549.


Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset.

Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10(1), 14-23.

Grundstrom, J., Chen, J., Ljungqvist, M. G., & Astrom, K. (2016). Transferring and compressing convolutional neural networks for face representations. In A. Campilho & F. Karray (Eds.), Image Analysis and Recognition (Vol. 9730, pp. 20-29).

Güçlü, U., & van Gerven, M. A. (2015). Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience, 35(27), 10005-10014.

Kanwisher, N., Tong, F., & Nakayama, K. (1998). The effect of face inversion on the human fusiform face area. Cognition, 68(1), B1-B11.

Kanwisher, N., & Yovel, G. (2006). The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B: Biological Sciences, 361(1476), 2109-2128.

Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology, 10(11), e1003915.

Koehler, C. L., Akimov, N. P., & Renteria, R. C. (2011). Receptive field center size decreases and firing properties mature in ON and OFF retinal ganglion cells after eye opening in the mouse. Journal of Neurophysiology, 106(2), 895-904.

Kohn, A., & Movshon, J. A. (2004). Adaptation changes the direction tuning of macaque MT neurons. Nature Neuroscience, 7(7), 764-772.

Kriegeskorte, N. (2015). Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1, 417-446.

Krizhevsky, A. (2014). One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997.

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Paper presented at Advances in Neural Information Processing Systems.

Liu, J., Harris, A., & Kanwisher, N. (2010). Perception of face parts and face configurations: An fMRI study. Journal of Cognitive Neuroscience, 22(1), 203-211.

McKone, E., Crookes, K., Jeffery, L., & Dilks, D. D. (2012). A critical review of the development of face recognition: Experience is less important than previously believed. Cognitive Neuropsychology, 29(1-2), 174-212.

Morton, J., & Johnson, M. H. (1991). CONSPEC and CONLERN: a two-process theory of infant face recognition. Psychological Review, 98(2), 164.

Nichols, D. F., Betts, L. R., & Wilson, H. R. (2010). Decoding of faces and face components in face-sensitive human visual cortex. Frontiers in Psychology, 1.

Norman, K. A., & O'Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110(4), 611-646.


Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition. Paper presented at the British Machine Vision Conference (BMVC).

Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., et al. (2017). Automatic differentiation in PyTorch.

Pospisil, D. A., Pasupathy, A., & Bair, W. (2018). 'Artiphysiology' reveals V4-like shape tuning in a deep network trained for image classification. eLife, 7, e38242.

Rossion, B., & Gauthier, I. (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1(1), 63-75.

Simion, F., & Di Giorgio, E. (2015). Face perception and processing in early infancy: inborn predispositions and developmental changes. Frontiers in Psychology, 6.

Simion, F., Macchi Cassia, V., Turati, C., & Valenza, E. (2001). The origins of face perception: specific versus non-specific mechanisms. Infant and Child Development: An International Journal of Research and Practice, 10(1-2), 59-65.

Song, Y., Qu, Y., Xu, S., & Liu, J. (2020). Implementation-independent representation for deep convolutional neural networks and humans in processing faces. bioRxiv.

Spillmann, L. (2014). Receptive fields of visual neurons: The early years. Perception, 43(11), 1145-1176.

Sugita, Y. (2008). Face perception in monkeys reared with no exposure to faces. Proceedings of the National Academy of Sciences of the United States of America, 105(1), 394-398.

Tavazoie, S. F., & Reid, R. C. (2000). Diverse receptive fields in the lateral geniculate nucleus during thalamocortical development. Nature Neuroscience, 3(6), 608-616.

Valenza, E., Simion, F., Cassia, V. M., & Umiltà, C. (1996). Face preference at birth. Journal of Experimental Psychology: Human Perception and Performance, 22(4), 892.

VanRullen, R., & Reddy, L. (2019). Reconstructing faces from fMRI patterns using deep generative neural networks. Communications Biology, 2(1), 193.

Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral Cortex, 28(12), 4136-4160.

Wiggs, C. L., & Martin, A. (1998). Properties and mechanisms of perceptual priming. Current Opinion in Neurobiology, 8(2), 227-233.

Wilmer, J. B., Germine, L., Chabris, C. F., Chatterjee, G., Williams, M., Loken, E., et al. (2010). Human face recognition ability is specific and highly heritable. Proceedings of the National Academy of Sciences of the United States of America, 107(11), 5238-5241.

Yamins, D. L., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619-8624.

Yi, D., Lei, Z., Liao, S., & Li, S. Z. (2014). Learning face representation from scratch. arXiv preprint arXiv:1411.7923.

Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81(1), 141-145.


Zhao, M., Cheung, S.-h., Wong, A. C. N., Rhodes, G., Chan, E. K. S., Chan, W. W. L., et al. (2014). Processing of configural and componential information in face-selective cortical areas. Cognitive Neuroscience, 5(3-4), 160-167.

Zhen, Z., Yang, Z., Huang, L., Kong, X.-z., Wang, X., Dang, X., et al. (2015). Quantifying interindividual variability and asymmetry of face-selective regions: a probabilistic functional atlas. NeuroImage, 113, 13-25.

Zhu, Q., Song, Y., Hu, S., Li, X., Tian, M., Zhen, Z., et al. (2010). Heritability of the specific cognitive ability of face perception. Current Biology, 20(2), 137-142.
