

Table of Contents: Volume 9, Number 7, 2009


The nonlinear structure of motion perception during smooth eye movements. Camille Morvan and Mark Wexler. J Vis July 9, 2009, 9(7):1; doi:10.1167/9.7.1

Binocular motor coordination during saccades and fixations while reading: A magnitude and time analysis. Marine Vernet and Zoï Kapoula. J Vis July 9, 2009, 9(7):2; doi:10.1167/9.7.2

Exploring the spatiotemporal properties of fractal rotation perception. Sarah Lagacé-Nadon, Rémy Allard, and Jocelyn Faubert. J Vis July 9, 2009, 9(7):3; doi:10.1167/9.7.3

Quantifying center bias of observers in free viewing of dynamic natural scenes. Po-He Tseng, Ran Carmi, Ian G. M. Cameron, Douglas P. Munoz, and Laurent Itti. J Vis July 9, 2009, 9(7):4; doi:10.1167/9.7.4

Perceived timing of new objects and feature changes. Ryota Kanai, Thomas A. Carlson, Frans A. J. Verstraten, and Vincent Walsh. J Vis July 9, 2009, 9(7):5; doi:10.1167/9.7.5

Learning illumination- and orientation-invariant representations of objects through temporal association. Guy Wallis, Benjamin T. Backus, Michael Langer, Gesche Huebner, and Heinrich Bülthoff. J Vis July 10, 2009, 9(7):6; doi:10.1167/9.7.6

Cue dynamics underlying rapid detection of animals in natural scenes. James H. Elder and Ljiljana Velisavljević. J Vis July 10, 2009, 9(7):7; doi:10.1167/9.7.7

Assessing direction-specific adaptation using the steady-state visual evoked potential: Results from EEG source imaging. Justin M. Ales and Anthony M. Norcia. J Vis July 14, 2009, 9(7):8; doi:10.1167/9.7.8

Effects of target enhancement and distractor suppression on multiple object tracking capacity. Katherine C. Bettencourt and David C. Somers. J Vis July 14, 2009, 9(7):9; doi:10.1167/9.7.9

Pupil dynamics during bistable motion perception. Jean-Michel Hupé, Cédric Lamirel, and Jean Lorenceau. J Vis July 15, 2009, 9(7):10; doi:10.1167/9.7.10

Higher-order aberrations produce orientation-specific notches in the defocused contrast sensitivity function. Humza J. Tahir, Neil R. A. Parry, Aristophanis Pallikaris, and Ian J. Murray. J Vis July 16, 2009, 9(7):11; doi:10.1167/9.7.11

Experimental validation of a Bayesian model of visual acuity. Eugénie Dalimier, Eliseo Pailos, Ricardo Rivera, and Rafael Navarro. J Vis July 21, 2009, 9(7):12; doi:10.1167/9.7.12

Spatial contrast sensitivity and grating acuity of barn owls. Wolf M. Harmening, Petra Nikolay, Julius Orlowski, and Hermann Wagner. J Vis July 22, 2009, 9(7):13; doi:10.1167/9.7.13

Perceived duration of visual motion increases with speed. Sae Kaneko and Ikuya Murakami. J Vis July 22, 2009, 9(7):14; doi:10.1167/9.7.14

A neurophysiologically plausible population code model for human contrast discrimination. Robbe L. T. Goris, Felix A. Wichmann, and G. Bruce Henning. J Vis July 31, 2009, 9(7):15; doi:10.1167/9.7.15

Learning to attend: Effects of practice on information selection. Todd A. Kelley and Steven Yantis. J Vis July 31, 2009, 9(7):16; doi:10.1167/9.7.16

Storing fine detailed information in visual working memory—Evidence from event-related potentials. Zaifeng Gao, Jie Li, Junying Liang, Hui Chen, Jun Yin, and Mowei Shen. J Vis July 31, 2009, 9(7):17; doi:10.1167/9.7.17

Age-related decline of contrast sensitivity for second-order stimuli: Earlier onset, but slower progression, than for first-order stimuli. Yong Tang and Yifeng Zhou. J Vis July 31, 2009, 9(7):18; doi:10.1167/9.7.18




Storing fine detailed information in visual working memory—Evidence from event-related potentials

Zaifeng Gao, Jie Li, Junying Liang, Hui Chen, Jun Yin, and Mowei Shen

Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

Visual working memory (VWM) maintains and manipulates a limited set of visual objects being actively used in visual processing. To explore whether and how fine detailed information is stored in VWM, four experiments were conducted while recording the contralateral delay activity (CDA), an event-related potential difference wave that reflects information maintenance in VWM. The type of remembered information was manipulated by adopting simple objects and complex objects as materials. We found that the amplitude of the CDA was modulated by object complexity: as the set size of the memory array rose from 2 to 4, the amplitude of the CDA stopped increasing when complex objects with detailed information had to be maintained, but continued to increase when highly discriminable simple objects were stored. These results suggest that VWM can store fine detailed information; however, it cannot store all the fine detailed information from 4 complex objects. This implies that VWM capacity is not characterized solely by a fixed number of objects; there is at least one stage that is influenced by the detailed information contained in the objects. These results are further discussed within a two-stage storage model of VWM: different types of perceptual information (highly discriminable features and fine detailed features) are maintained in VWM via two distinct mechanisms.

Keywords: visual working memory, capacity, detailed information

Citation: Gao, Z., Li, J., Liang, J., Chen, H., Yin, J., & Shen, M. (2009). Storing fine detailed information in visual working memory—Evidence from event-related potentials. Journal of Vision, 9(7):17, 1–12, http://journalofvision.org/9/7/17/, doi:10.1167/9.7.17.

Received November 21, 2008; published July 31, 2009.

Introduction

Visual working memory (VWM) is a critical component of visual information processing (Baddeley, 1992). Though its capacity is highly limited, it allows us to integrate information presented across different eye fixations into a coherent perception of the visual scene, as well as to compare current information with information already stored in memory (Hollingworth, Richard, & Luck, 2008; Irwin & Andrews, 1996). Because of its significance, much research over the past decade has been devoted to exploring the capacity limit of VWM (see Jiang, Makovski, & Shim, 2009; Jonides et al., 2008, for reviews).

Currently, two contrasting views on VWM capacity have been proposed. The first holds that the capacity of VWM is limited by the number of visual objects, and that approximately 3–4 object representations can be accurately maintained, independent of the number of features within an object and of object complexity. For example, Luck and Vogel (1997) found that the number of objects VWM can hold is equivalent for objects defined by a single feature (e.g., color, orientation) and for multi-feature objects (e.g., tilted bars defined by color and orientation). The second view argues that VWM capacity is a flexible but limited resource (Alvarez & Cavanagh, 2004, 2008; Bays & Husain, 2008), such that the number of objects that can be stored decreases as object complexity or information load increases (e.g., Alvarez & Cavanagh, 2004, 2008; Eng, Chen, & Jiang, 2005). In one of their studies, Alvarez and Cavanagh (2004) used several categories of complex shapes as materials, which are difficult to search for among distracters of the same category. They suggested that search rate could serve as an index of visual complexity or information load per object, since the more visual information that must be analyzed per object, the slower the processing rate. They found that capacity estimates decrease monotonically as complexity increases. However, in a follow-up study, Awh, Barton, and Vogel (2007) posited that in Alvarez's research the poor change detection performance for more complex items was due to high memory-test similarity, which made the comparison between the representations stored in memory and the items presented in the test array much more difficult. After reducing memory-test similarity by changing the stimuli into a different category in the test-change condition, they found no statistical difference in VWM capacity estimates between simple and complex objects. Thus, they concluded that VWM represents a fixed number of objects regardless of complexity.

So far, Awh and his colleagues (2007, 2008) have provided convincing evidence supporting the fixed-number-storage view, even for complex objects. Notably, Awh et al. also suggested that those complex object representations have limited resolution in VWM. Jiang, Shim, and Makovski (2008) further claimed that these representations may contain only highly discriminable features. Apparently, fine detailed information is not maintained in these limited-resolution representations (see also Most et al., 2001; O'Regan, 1992; Simons & Chabris, 1999; but see Hollingworth, 2004; Liu & Jiang, 2005). So whether and how fine detailed information is stored in VWM remains unclear.

As is known from previous studies, a change detection task can be decomposed into encoding, maintenance, and comparison phases (Pessoa, Gutierrez, Bandettini, & Ungerleider, 2002; Todd & Marois, 2004). In their behavioral research, Awh et al. also found that performance dropped when the change signal in the test array was subtle, a finding replicated by other studies (Alvarez & Cavanagh, 2004, 2008; Eng et al., 2005). Awh et al. attributed this complexity effect to the comparison phase. However, there are actually two possible explanations for this result. On one hand, VWM may consist only of a fixed number of low-resolution objects. The drop in performance would then be due purely to comparison errors, since low-resolution object representations are not sufficient for comparisons involving fine detailed information. On the other hand, it is equally plausible that, beyond the fixed number of coarse representations, VWM also contains a limited amount of fine detailed information, which may also contribute to the comparison process. However, since only a very limited amount of fine detailed information can be successfully stored, performance cannot be as good as when the comparison requires only low-resolution information.

Since ERPs provide a measure of stimulus-related processing with high temporal resolution, we attempted to investigate the storage of fine detailed information by recording event-related potentials (ERPs) during the maintenance phase of object representations in VWM. Specifically, Vogel and Machizawa (2004), Vogel, McCollough, and Machizawa (2005), and Vogel, Woodman, and Luck (2006) found an event-related potential waveform with a sustained negative voltage over the hemisphere contralateral to the memorized hemifield that persists throughout the memory retention interval in a VWM task. Importantly, the amplitude of this contralateral delay activity (CDA) is higher for 4 simple objects than for 2 simple objects. They interpreted the amplitude of the CDA as reflecting the maintenance of object representations in VWM. We therefore chose the CDA as an index of representations during the maintenance phase of VWM in the present study.

In the following four experiments, two kinds of stimuli with different levels of complexity, i.e., simple objects and complex objects, were used as materials. The term complexity as used here does not refer to the intrinsic physical properties of an object, but to the amount of detail necessary to perform the task. For the simple objects, retaining highly discriminable simple features (i.e., low-resolution information) is sufficient to detect the change; for the complex objects, the change signal is subtle, so storing fine detailed features is necessary to detect the subtle change. Pilot behavioral studies indicated that only about 2 complex objects can be maintained in VWM. Therefore, if fine detailed information cannot be stored in VWM and VWM capacity is set by a fixed number of objects regardless of complexity, then, to anticipate, the amplitude of the CDA for 4 objects should always be higher than that for 2 objects; otherwise, if fine detailed information can be stored in VWM, then the amplitude of the CDA may be modulated by object complexity.

Experiment 1

Previous research showed that the search rate for random polygons is slow (about 70–80 ms/item) and that only 2 of them can be remembered (Alvarez & Cavanagh, 2004), indicating that they are a kind of complex stimulus. In Experiment 1, we chose random polygons (adapted from Alvarez & Cavanagh, 2004) as complex shapes, and basic shapes as simple shapes, to explore the effect of complexity on VWM capacity.

Methods

Participants

Twelve right-handed students (6 females) from Zhejiang University were paid to participate in this experiment. Participants reported no history of neurological problems, and all had normal or corrected-to-normal vision.

Stimuli

Two types of shapes were used (Figure 1): 6 random polygons and 6 basic shapes. Each object subtended a visual angle of 1.23° × 1.23°. All stimuli were black and were presented on a gray background.

Experimental design

Participants were seated in an electrically shielded and sound-attenuated recording chamber at a distance of 70 cm from a 17-inch monitor. Stimulus arrays were presented within two 4° × 7.3° rectangular areas, centered 3° to the left and right of a central fixation cross on a gray background. The memory array consisted of 2 or 4 different shapes in each hemifield. Stimulus positions were randomly selected on each trial, with the distance between items within a hemifield at least 2° (center to center). The basic shapes or random polygons were used as materials in different blocks. The shape of each item in the memory array was selected at random from the same category of shapes without repetition.
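As a rough check on the display geometry (our own arithmetic, not values reported by the authors): on-screen extent follows from size ≈ 2 × D × tan(θ/2), so at the 70 cm viewing distance an object of 1.23° spans about 2 × 70 × tan(0.615°) ≈ 1.5 cm, and the 3° eccentricity of each hemifield center corresponds to about 70 × tan(3°) ≈ 3.7 cm from fixation.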

Procedure

Each trial began with a 200 ms arrow cue presented over a fixation point, pointing either to the left or to the right. After a variable delay, which ranged from 250 to 350 ms, a 500 ms memory array was displayed, followed by a 900 ms blank period and then a 2000 ms test array (Figure 2). Participants were required to keep their eyes fixated while remembering the shapes in the hemifield indicated by the arrow cue. On 50% of trials, one shape in the test array in the cued hemifield differed from the corresponding shape in the memory array; on the remaining trials the shapes of the two arrays were identical. When a shape changed between the memory and test arrays, the new shape was selected at random from the shapes that had not appeared in the memory array. The subject's task was to indicate whether the memory and test arrays were the same or different, with the accuracy rather than the speed of the response being stressed. Each participant was tested in two sessions: one was the basic shape condition, and the other was the random polygon condition. The order of the two sessions was counterbalanced. Each session had 4 blocks, each block lasting about 6 minutes, with a 2-minute break in between. Each subject performed 160 trials per set size.
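For readers who want a compact view of the trial structure, the sketch below restates the timeline in code form. It is illustrative only; the names (build_trial_timeline, the jitter call) are ours, and this is not the authors' stimulus-presentation script.

```python
import random

# Illustrative timeline of one change-detection trial (durations in ms),
# following the procedure described above.
def build_trial_timeline():
    return [
        ("arrow_cue",    200),                       # left/right cue over fixation
        ("cue_delay",    random.randint(250, 350)),  # variable cue-to-memory interval
        ("memory_array", 500),                       # 2 or 4 shapes per hemifield
        ("retention",    900),                       # blank delay; CDA measured here
        ("test_array",   2000),                      # unspeeded same/different judgment
    ]

if __name__ == "__main__":
    for event, duration in build_trial_timeline():
        print(f"{event:>12}: {duration} ms")
```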

Electrophysiological recording and analyses

The EEG was recorded from 32 scalp sites using Ag/AgCl electrodes mounted in an elastic cap, with the reference on the left and right mastoids. The vertical electrooculogram (VEOG) and horizontal electrooculogram (HEOG) were recorded with two pairs of electrodes, one pair placed above and below the left eye, and the other pair placed beside the two eyes. All inter-electrode impedances were maintained below 5 kΩ. The EEG and EOG were amplified by SynAmps amplifiers (NeuroScan Inc., Sterling, Virginia, USA) using a 0.05–100 Hz bandpass and continuously sampled at 1000 Hz/channel for off-line analysis.

Eye movements and blinks were corrected using an ICA procedure (Jung et al., 2000). Remaining artifacts exceeding ±75 μV in amplitude were rejected. Artifact-free data were then segmented into epochs ranging from 200 ms before to 1400 ms after memory array onset for all conditions. Five pairs of electrode sites over posterior parietal, lateral occipital, and posterior temporal areas (P3/P4, CP3/CP4, P7/P8, TP7/TP8, and O1/O2) were chosen for analysis. The contralateral waveforms were computed by averaging the activity recorded at left-hemisphere electrode sites when participants were cued to remember the right side of the memory array with the activity recorded at right-hemisphere electrode sites when they were cued to remember the left side. The CDA was constructed by subtracting the ipsilateral activity from the contralateral activity. The averaged CDA waveforms were smoothed by applying a 17 Hz low-pass filter (24 dB).

Pilot studies showed that the contralateral activity began to diverge from the ipsilateral activity at about 200 ms after memory array onset and persisted throughout the maintenance period; a measurement window of 300–1400 ms after the onset of the memory array was therefore adopted in the present study. For factors that had more than two levels, the Greenhouse-Geisser epsilon was used to adjust the degrees of freedom.
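To make the contralateral-minus-ipsilateral logic concrete, the following NumPy sketch shows one way the CDA and its mean amplitude could be computed from trial-wise ERPs. It reflects the description above under our own assumptions about data layout; the array names and shapes are ours, and this is not the authors' analysis code.

```python
import numpy as np

def cda_waveform(erp_left_sites, erp_right_sites, cued_side):
    """Contralateral-minus-ipsilateral difference wave.

    erp_left_sites, erp_right_sites: arrays of shape (n_trials, n_times) holding
        single-trial ERPs averaged over the chosen left- and right-hemisphere
        electrodes (e.g., P3/P4, CP3/CP4, P7/P8, TP7/TP8, O1/O2).
    cued_side: sequence of "left"/"right" labels, one per trial.
    """
    cued_side = np.asarray(cued_side)
    right_cued = cued_side == "right"
    left_cued = ~right_cued

    # Contralateral = left-hemisphere sites on right-cued trials plus
    # right-hemisphere sites on left-cued trials (and vice versa for ipsilateral).
    contra = np.concatenate([erp_left_sites[right_cued],
                             erp_right_sites[left_cued]]).mean(axis=0)
    ipsi = np.concatenate([erp_right_sites[right_cued],
                           erp_left_sites[left_cued]]).mean(axis=0)
    return contra - ipsi

def mean_cda_amplitude(cda, srate=1000, epoch_start_ms=-200, window_ms=(300, 1400)):
    """Mean amplitude in the 300-1400 ms measurement window used for the ANOVAs."""
    i0 = int((window_ms[0] - epoch_start_ms) * srate / 1000)
    i1 = int((window_ms[1] - epoch_start_ms) * srate / 1000)
    return cda[i0:i1].mean()
```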

Figure 1. Objects used in Experiment 1.

Figure 2. Example of a change memory trial for the right hemifield in Experiment 1.


Results and discussion

Behavioral data

As shown in Figure 3, change detection accuracy declined as object complexity and set size increased. A two-way analysis of variance (ANOVA) with the factors of complexity (simple vs. complex) and set size (2 vs. 4) yielded significant main effects of complexity, F(1,11) = 169.24, p < 0.001, and set size, F(1,11) = 382.61, p < 0.001, yet no significant interaction between these factors, F(1,11) = 2.414, p = 0.149. Memory capacity at each set size was estimated with Cowan's K formula (Cowan, 2001), separately for the simple object condition and the complex object condition.¹ Capacity estimates (K) showed that 1–2 random polygons could be remembered (K = 1.15 for set size 2, K = 1.05 for set size 4), and 2–3 items for basic shapes (K = 1.74 for set size 2, K = 2.55 for set size 4).
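For reference, Cowan's K in this kind of whole-display change-detection task is usually computed as K = N × (H − F), where N is the set size, H the hit rate on change trials, and F the false-alarm rate on no-change trials (Cowan, 2001); for instance, hypothetical rates of H = 0.85 and F = 0.10 at set size 2 would give K = 2 × 0.75 = 1.5. We note this as the standard form of the formula; the underlying hit and false-alarm rates are not reported here.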

ERP data

Consistent with previous research (McCollough, Machizawa, & Vogel, 2007; Vogel & Machizawa, 2004) and with our behavioral results, increasing the set size from 2 to 4 produced a substantial increase in the amplitude of the CDA in the basic shape condition (Figure 4a). However, the amplitude did not keep increasing for random polygons (Figure 4b). Taking set size and electrode as factors, a two-way ANOVA on the mean amplitude of the CDA was conducted to test this effect for each shape condition. The results in the basic shape condition revealed a significant main effect of set size, F(1,11) = 6.652, p = 0.041, indicating that the amplitude for remembering 4 basic shapes was higher than that for remembering 2 basic shapes. The main effect of electrode was non-significant, F(4,44) = 2.613, p = 0.091, as was the interaction between the two factors, F(4,44) = 0.808, p = 0.488. Importantly, the two-way ANOVA in the random polygon condition yielded no main effect of set size, F(1,11) = 0.771, p = 0.40, indicating that the amplitude for remembering 4 polygons was no higher than that for remembering 2 polygons. The main effect of electrode was significant, F(4,44) = 3.502, p = 0.025, and the interaction between the two factors was non-significant, F(4,44) = 1.719, p = 0.205. Hence, the ERP results of Experiment 1 showed that the amplitude of the CDA can be modulated by the complexity of the materials, suggesting that fine detailed information can be stored in VWM and that the detailed information from 2 complex objects already uses up the VWM storage resource.

Figure 3. Averaged behavioral results in Experiment 1.

Figure 4. Averaged ERP results in the basic shape (a) and random polygon condition (b) of Experiment 1.

Experiment 2

In Experiment 1, the complexity of the memory items was manipulated by adopting two different kinds of stimuli, random polygons and basic shapes. Here, we attempted to manipulate complexity within objects by modulating the top-down task requirement. Specifically, the same set of random polygons as in Experiment 1, but with different colors, was used as materials, and the participants were asked to remember different dimensions of the objects. According to Luck and Vogel (1997), 3–4 objects can be remembered regardless of the complexity of the features they contain. In contrast, according to Alvarez and Cavanagh (2004, 2008), VWM capacity differs for complex features and simple features. Thus, our goal in Experiment 2 was to explore whether the amplitude of the CDA can be modulated by the complexity of the object features to be encoded.

Methods

Participants

Twenty-four right-handed students from Zhejiang University were paid to participate in this experiment. Each experimental session had 12 participants (7 females). All participants reported no history of neurological problems, and all had normal or corrected-to-normal vision.

Stimuli

The random polygons from Experiment 1, rendered in different colors, were used as materials. Each polygon could take one of 7 colors: red, green, blue, violet, yellow, black, and white. The color and shape of each item in the memory array were randomly selected without repetition.

Experimental design

The experimental design was the same as in Experiment 1.

Procedure

All aspects of Experiment 2 were the same as in Experiment 1 except that there were two sessions of trials and each group of participants took part in only one of them. In the color session (i.e., the simple feature group), the participants only needed to remember the colors of the polygons, while the shapes of the polygons were identical between the memory and test arrays. The other session was the random polygon session (i.e., the complex feature group), in which the participants only needed to remember the shapes of the polygons, while the colors were identical between the memory and test arrays. In each session, participants were instructed to report whether the memory and test arrays were the same or different in the corresponding target feature dimension.

Electrophysiological recording and analyses

Recording and analyses were the same as in Experiment 1.

Results and discussion

Behavioral data

As shown in Figure 5, accuracy for color was much higher than that for random polygons, t(22) = 11.038, p < 0.001. In addition, change detection accuracy declined as set size increased from 2 to 4 in both conditions (color: F(1,11) = 36.250, p < 0.001; random polygons: F(1,11) = 63.367, p < 0.001). Memory capacity under each feature condition was estimated; the results showed that about 1 random polygon could be remembered (K = 0.86 for set size 2, K = 0.69 for set size 4), compared with 2–3 colors (K = 1.83 for set size 2, K = 2.53 for set size 4).

Figure 5. Averaged behavioral results in Experiment 2.

ERP data

The ERP results of Experiment 2 (Figure 6) were similar to those of Experiment 1. A two-way ANOVA in the color condition yielded a significant main effect of set size, F(1,11) = 5.914, p = 0.033, indicating that the amplitude for remembering 4 items was higher than that for remembering 2 items; the main effect of electrode was non-significant, F(4,44) = 2.275, p = 0.120, and the interaction between set size and electrode was non-significant, F(4,44) = 1.061, p = 0.359. However, in the random polygon condition, the ANOVA found a non-significant main effect of set size, F(1,11) = 0.861, p = 0.373, indicating that there was no difference in the amplitude of the CDA between retaining 2 and 4 polygons. The main effect of electrode was marginally significant, F(4,44) = 2.737, p = 0.067, and the interaction was non-significant, F(4,44) = 1.185, p = 0.330. Therefore, by modulating the top-down task requirement, the ERP results of the present experiment further implied that fine detailed information can be stored in VWM. More importantly, even though the features belonged to the same set of objects, the VWM resource was exhausted by the detailed information from 2 objects, in contrast to 4 objects in the simple feature condition.

Figure 6. Averaged ERP results in the color (a) and random polygon condition (b) of Experiment 2.

Experiment 3

Experiment 3 had two aims. First, we intended to investigate the nature of fine detailed information. Second, we explored whether the complexity effect found in Experiments 1 and 2 could be extended to other kinds of stimuli. We hypothesized that fine detailed information is information that requires serial, attentive perceptual processing. To test this hypothesis, we adopted Landolt rings as materials. Though a Landolt ring looks like a simple object at first glance, previous research suggests that processing the orientation of a Landolt ring's gap requires focal attention, with a search rate of about 100 ms/item in visual search tasks (Gao, Shen, Gao, & Li, 2008; Shen et al., 2007; Woodman & Luck, 2003). Therefore, according to the above hypothesis, gap orientation is a kind of fine detailed information. We predicted that, when fine detailed information had to be retained, the CDA amplitude for 4 gap orientations would show no increase over that for 2 gap orientations.

Methods

Participants

Fourteen right-handed students (6 females) from Zhejiang University were paid to participate in this experiment. All participants reported no history of neurological problems, and all had normal or corrected-to-normal vision.

Stimuli

Eight Landolt rings with different colors were used (Figure 7). Each of them subtended a visual angle of 1.01° × 1.01°. The same set of colors as in Experiment 2 was adopted. Each item had a gap, whose orientation was selected from a set of 8 orientations: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°. The colors and orientations of the rings in the memory array were selected randomly, with the constraint that no more than 2 items shared the same color or orientation.

Figure 7. Colored Landolt rings with eight possible gap orientations used in Experiment 3; a red Landolt ring with 0° gap orientation is shown as an example.

Experimental design

The experimental design was the same as in Experiment 1.

Procedure

All aspects of Experiment 3 were identical to Experiment 1, with the following exception. Each participant was tested in two sessions. One was the color session, in which the participants only needed to remember the colors of the Landolt rings, while the gaps of the rings were identical between the memory and test arrays. The other was the orientation session, in which the participants only needed to remember the orientations of the gaps, while the colors were identical between the memory and test arrays. Here, the complexity level was low in the first session and high in the second. Participants were instructed to report whether the memory and test arrays were the same or different in the corresponding target feature dimension. Each session had 2 blocks, each block lasting about 6 minutes, with a 2-minute break in between. Each subject performed at least 84 trials per set size.

Electrophysiological recording and analyses

Recording and analyses were the same as in Experiment 1.

Results and discussion

Behavioral data

Accuracy for color was much higher than that for orientation (Figure 8). A two-way ANOVA with factors of complexity (simple vs. complex) and set size (2 vs. 4) found significant main effects of complexity, F(1,13) = 27.673, p < 0.001, and set size, F(1,13) = 203.492, p < 0.001, yet a non-significant interaction, F(1,13) = 2.181, p = 0.164. Capacity estimates showed that about 1–2 gap orientations could be remembered (K = 1.50 for set size 2, K = 1.52 for set size 4), compared with 2–3 colors (K = 1.76 for set size 2, K = 2.34 for set size 4).

Figure 8. Averaged behavioral results in Experiment 3.

ERP data

The ERP results of Experiment 3 (Figure 9) replicated the response profiles obtained in Experiments 1 and 2. A two-way ANOVA on the mean amplitude of the CDA in the color condition yielded a significant main effect of set size, F(1,13) = 9.459, p = 0.009, indicating that the amplitude for retaining 4 objects was higher than that for retaining 2 objects. The main effect of electrode was significant, F(4,52) = 4.572, p = 0.010, with a non-significant interaction between the two factors, F(4,52) = 0.822, p = 0.486. In the gap orientation condition, importantly, the main effect of set size was non-significant, F(1,13) = 1.179, p = 0.297, indicating that there was no difference in the amplitude of the CDA between retaining 2 and 4 gap orientations. The main effect of electrode was significant, F(4,52) = 3.130, p = 0.046, yet the interaction between the two factors was non-significant, F(4,52) = 0.407, p = 0.738. Therefore, the current experiment supports our hypothesis about the nature of fine detailed information, namely that it requires focal attention to process. Furthermore, the complexity effect whereby the amplitude of the CDA is modulated by object complexity extends to Landolt rings, further supporting our conclusion about the storage of fine detailed information in VWM.

Figure 9. Averaged ERP results in the color (a) and gap condition (b) of Experiment 3.

Experiment 4

Some may argue that the use of blocked trials of simple and complex objects could lead participants to adopt different strategies in separate blocks. Specifically, since we have voluntary control over the number of objects we choose to store in VWM, participants might simply have chosen to store 2 objects in the case of the complex objects, thus producing the above result that the amplitude of the CDA did not rise from set size 2 to 4. To test this alternative possibility, we intermixed the simple objects and complex objects used in Experiment 1 within a single block. In this case, it was less likely that participants would choose to retain a smaller number of items in the complex object condition, because on 50% of trials participants who retained as many objects as possible would be rewarded with accurate change detection. To anticipate, if strategy indeed influenced the number of objects that participants chose to hold in VWM, there should be no difference in the response profiles of the CDA between simple objects and complex objects.

Methods

Participants

Twelve right-handed students (6 females) from Zhejiang University were paid to participate in this experiment. All participants reported no history of neurological problems, and all had normal or corrected-to-normal vision.

Stimuli

The stimuli were the same as in Experiment 1.

Electrophysiological recording and analyses

All sites were recorded with a left-mastoid reference, and the data were re-referenced offline to the algebraic average of the left and right mastoids. All other aspects were the same as in Experiment 1.
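A minimal sketch of this offline re-referencing step, assuming the continuous data were digitized against the left mastoid (M1) and that the right-mastoid channel (recorded as M2 − M1) was retained; the variable names are ours, not the authors' code:

```python
import numpy as np

def reref_to_average_mastoids(eeg, right_mastoid):
    """Re-reference left-mastoid-referenced data to the mastoid average.

    eeg: array of shape (n_channels, n_samples), each channel recorded as ch - M1.
    right_mastoid: array of shape (n_samples,), recorded as M2 - M1.
    Referencing to (M1 + M2) / 2 is equivalent to subtracting (M2 - M1) / 2.
    """
    return np.asarray(eeg) - np.asarray(right_mastoid) / 2.0
```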

Results and discussion

Behavioral data

Change detection performance (Figure 10) was similar to that in Experiment 1. A two-way ANOVA with the factors of complexity (simple vs. complex) and set size (2 vs. 4) found significant main effects of complexity, F(1,11) = 121.821, p < 0.001, and set size, F(1,11) = 216.514, p < 0.001, yet no significant interaction between the two factors, F(1,11) = 1.035, p = 0.331. Capacity estimates (K) showed that only about 1 random polygon could be remembered (K = 0.98 for set size 2, K = 0.80 for set size 4), compared with 2–3 items for basic shapes (K = 1.73 for set size 2, K = 2.13 for set size 4).

Figure 10. Averaged behavioral results in Experiment 4.

ERP data

Visual inspection of Figure 11 suggested that the amplitude of the CDA for 4 basic shapes was higher than that for 2 basic shapes, whereas the amplitude of the CDA for 4 random polygons was still no higher than that for 2 random polygons. To evaluate the effect of set size, a two-way ANOVA was conducted separately for each shape condition on the mean amplitude of the CDA. These analyses showed a significant main effect of set size in the basic shape condition, F(1,11) = 14.976, p = 0.003, but no significant main effect of set size in the random polygon condition, F(1,11) = 3.332, p = 0.095; indeed, at some electrodes the amplitude of the CDA for 2 random polygons was even higher than that for 4 random polygons. These results indicate that object complexity modulated the amplitude of the CDA and, moreover, that VWM cannot always store all the information from 4 objects regardless of complexity. All other effects were non-significant, all ps > 0.1. Overall, the results of Experiment 4 suggest that participants did not adopt different strategies in the different sessions of trials in Experiments 1, 2, and 3.

Figure 11. Averaged ERP results in the basic shape (a) and random polygon condition (b) of Experiment 4.

General discussion

VWM holds information that is actively being used in cognitive performance. Due to the processing limitations of the brain, only a small amount of information can be selected and consolidated into VWM for further processing. To avoid the interference of comparison errors with the estimation of VWM capacity (Awh et al., 2007) while investigating the maintenance phase directly, the storage mechanism of VWM for fine detailed information was explored by taking the amplitude of the CDA as an index. Our ERP results revealed that, for maintaining simple objects, the amplitude of the CDA for 4 objects was higher than that for 2 objects; however, for maintaining fine detailed information, there was no such difference. To our knowledge, this is the first time that clear ERP evidence has been presented suggesting that VWM does not merely represent a fixed number of objects, but is also affected by the fine detailed information contained in the memory materials.

One of the striking results from Awh et al. (2007) is that behavioral performance did not drop with increasing object complexity, as long as the difference between the memory array and the test array was salient. This suggests that the comparison between memory and test arrays plays an important role in the complexity effect reported in previous studies (Alvarez & Cavanagh, 2004; Olson & Jiang, 2002; Wheeler & Treisman, 2002; Xu, 2002). However, from this result alone, it is still unclear how the on-line maintenance of object representations is influenced by the fine detailed information of each object. The present study took the amplitude of the CDA as the dependent measure, which can directly track the maintenance process itself and exclude any effect from the comparison phase. Indeed, on each trial, recording of the CDA signal was completed before the presentation of the test array. Our findings clearly indicate that the amplitude of the CDA is modulated by fine detailed information, thereby providing strong evidence for the existence of a stage of processing in VWM that is affected by the fine detailed information contained in the objects.

To provide a comprehensive interpretation of the effect of fine detailed information on the CDA, it is necessary to briefly review other recent models of VWM that have emphasized object complexity. To reconcile the evidence that, on the one hand, storage of simple features reveals strong object-based processing (Awh et al., 2007; Luck & Vogel, 1997) and, on the other hand, the complexity of the memory array also significantly influences performance (Alvarez & Cavanagh, 2004; Olson & Jiang, 2002; Wheeler & Treisman, 2002; Xu, 2002), some researchers have suggested that storage in VWM is not a unitary process, but consists of two dissociable stages whose capacities are limited by different types of visual information. Alvarez and Cavanagh (2008) proposed that VWM involves at least two stages of processing. In the first stage, the low-resolution boundary features of about 4 objects (e.g., the shape of the outer contour of a Gabor patch) are extracted, regardless of the complexity of each object. These low-resolution boundary features serve as indexing features for retrieving other information. In the second stage of processing, based upon those boundary features, finer detailed surface features (e.g., the striped texture in a Gabor patch) can be encoded and maintained. However, storage of high-resolution surface features at this stage is limited by object complexity: the more surface feature details an object contains, the smaller the number of objects that can be stored. Similarly, Gao et al. (2008) suggested that there are dissociable mechanisms in VWM for storing information extracted at different stages of perceptual processing. Object-based storage in VWM is not based on the coherent object representation assembled by serial attentive processing; rather, it originates from highly discriminable simple features, which have already been segmented into individual low-resolution objects at the end of parallel perceptual processing. The output of serial, attentive perceptual processing (e.g., color-color conjunctions), which is a kind of fine detailed information, cannot be added to the object representations in VWM for free, but requires extra resources. The two-stage processing hypothesis has also received support from recent neuroimaging results. Xu and Chun (2006, 2007, 2009) found two dissociable neural mechanisms mediating VWM: at most 4 objects are first selected and encoded by the inferior intraparietal sulcus (IPS) according to their spatial locations; then a subset of these selected objects are encoded into high-resolution representations by the superior IPS, depending on complexity.

Taking all the findings together (Alvarez & Cavanagh, 2008; Awh et al., 2007; Xu & Chun, 2006; the current research), the present results confirm that the amplitude of the CDA has a substantial connection with VWM capacity. However, it is not correlated with the processing of a fixed number of object representations containing only low-resolution boundary features (Alvarez & Cavanagh, 2008) or only the low-resolution output of parallel perceptual processing (Gao et al., 2008). On the contrary, it reflects the second stage of storage, i.e., the maintenance of fine detailed information, which is influenced not only by the number of objects but also by the resolution at which each object is encoded. Accordingly, just as Awh et al. revealed, there is indeed a stage in VWM that can hold a fixed number of objects, which actually reflects the property of the first stage of storing low-resolution information.

The main focus of the current research is the mechanism for storing fine detailed information. We therefore adopted complex objects as materials, since this is a widely adopted and convenient, intuitive way to manipulate fine detailed information (e.g., Alvarez & Cavanagh, 2004, 2008; Eng et al., 2005; Makovski & Jiang, 2008). However, further research is needed to explore the nature of this fine detailed information. First, to ensure that the fine detailed information had to be remembered, the current research adopted a within-category change task, and only two kinds of complex objects were tested. It is possible that the specific task and stimuli used here led the participants to adopt a strategy in which only about 2 complex objects were remembered, so the generalizability of our conclusion should be further tested with other tasks and complex objects. Second, simple objects can also carry fine detailed information. For example, we need to encode the fine detailed information of simple objects (e.g., oriented bars) when the change is subtle (see Jiang et al., 2008 for an example). It is thus an intriguing and theoretically important question how fine detailed information is stored for simple objects. Moreover, although the current research provides evidence supporting the hypothesis that, at a certain stage of processing, storage in VWM is limited by the fine detailed information contained in the object, the mechanism by which fine detailed information influences storage remains largely unknown. For instance, how are objects from a previous, lower-level stage selected into that stage? How is an object represented at that stage? All these questions need further exploration.

The current findings about the CDA not only enrich theories of VWM, but also provide methodological implications for how to use the CDA appropriately as a powerful tool to explore VWM in the future. To our knowledge, the CDA is the only known neural signature of VWM by which one can dynamically track the on-line maintenance of visual information (see Drew, McCollough, & Vogel, 2006, for a review). In Vogel and Machizawa (2004), low-resolution simple objects were adopted as memory material, and the results revealed a strong correlation between the amplitude of the CDA and the number of objects held in VWM. Driven by their results, we had initially expected that the amplitude of the CDA might correlate only with the number of objects, regardless of object complexity. In that case, we could take the CDA as an index for diagnosing what counts as an 'object' in VWM. However, after the series of experiments reported here, we found quite the opposite result: the amplitude of the CDA can be strongly influenced by the resolution at which the memory materials must be encoded. This unexpected result indicates that, in future research, when employing the CDA as a tool to measure the number of objects held in VWM, researchers should be careful to exclude any potential confound from object complexity. Nevertheless, the CDA still holds promise as an alternative tool in future studies. As discussed above, how fine detailed information is processed in VWM is still far from clear. If the CDA is reliably correlated with the degree of complexity of the memory materials, one can adopt it to uncover the algorithm by which 'complexity' is computed in VWM.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (No. 30870765; No. 30570604), the Key Project of Humanities and Social Sciences, Ministry of Education (No. 07JZD0029), the Key Project of the National Social Science Foundation of China (No. 07AY001), the Fund of the Ministry of Education for Doctoral Programs in Universities of China (No. 20060335034), the National Foundation for Fostering Talents of Basic Science (No. J0630760), and the Research Center of Language and Cognition, Zhejiang University. We are grateful to Tao Gao, Edward Awh, Lun Zhao, and an anonymous reviewer for insightful comments. We are also indebted to Yisheng Dong and Wenjun Yu for assistance with figure production.

Commercial relationships: none.
Corresponding author: Mowei Shen.
Email: [email protected].
Address: Department of Psychology, Xixi Campus, Zhejiang University, Hangzhou, 310028, P.R. China.

Footnote

¹Though K may be underestimated in the complex object condition (Awh et al., 2007), we calculated K here for the following reasons. First, K is an index sensitive to the fine detailed information contained in the object. Second, K can be used to test the validity of the object complexity manipulation and to make a direct comparison with previous work.


References

Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15, 106–111. [PubMed]

Alvarez, G. A., & Cavanagh, P. (2008). Visual short-term memory operates more efficiently on boundary features than it does on the surface features. Perception & Psychophysics, 70, 346–364. [PubMed] [Article]

Awh, E., Barton, B., & Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. Psychological Science, 18, 622–628. [PubMed]

Baddeley, A. (1992). Working memory. Science, 255, 556–559. [PubMed]

Bays, P. M., & Husain, M. (2008). Dynamic shifts of limited working memory resources in human vision. Science, 321, 851–854. [PubMed] [Article]

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87–185. [PubMed]

Drew, T. W., McCollough, A. W., & Vogel, E. K. (2006). Event-related potential measures of visual working memory. Clinical EEG and Neuroscience, 37, 286–291. [PubMed]

Eng, H. Y., Chen, D., & Jiang, Y. (2005). Visual working memory for simple and complex visual stimuli. Psychonomic Bulletin & Review, 12, 1127–1133. [PubMed] [Article]

Gao, T., Shen, M., Gao, Z., & Li, J. (2008). Object-based storage in visual working memory and the visual hierarchy. Visual Cognition, 16, 103–106.

Hollingworth, A. (2004). Constructing visual representations of natural scenes: The roles of short- and long-term visual memory. Journal of Experimental Psychology: Human Perception and Performance, 30, 519–537. [PubMed]

Hollingworth, A., Richard, A. M., & Luck, S. J. (2008). Understanding the function of visual short-term memory: Transsaccadic memory, object correspondence, and gaze correction. Journal of Experimental Psychology: General, 137, 163–181. [PubMed]

Irwin, D. E., & Andrews, R. V. (1996). Integration and accumulation of information across saccadic eye movements. In T. Inui & J. L. McClelland (Eds.), Attention and performance XVI: Information integration in perception and communication (pp. 125–155). Cambridge, MA: MIT Press.

Jiang, Y., Shim, W. M., & Makovski, T. (2008). Visual working memory for line orientations and face identities. Perception & Psychophysics, 70, 1581–1591. [PubMed] [Article]

Jiang, Y. V., Makovski, T., & Shim, W. M. (2009). Visual memory for features, conjunctions, objects, and locations. In J. R. Brockmole (Ed.), The visual world in memory (pp. 33–65). Hove, UK: Psychology Press.

Jonides, J., Lewis, R. L., Nee, D. E., Lustig, C. A., Berman, M. G., & Moore, K. S. (2008). The mind and brain of short-term memory. Annual Review of Psychology, 59, 193–224. [PubMed]

Jung, T. P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E., & Sejnowski, T. J. (2000). Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology, 111, 1745–1758. [PubMed]

Liu, K., & Jiang, Y. (2005). Visual working memory for briefly presented scenes. Journal of Vision, 5(7):5, 650–658, http://journalofvision.org/5/7/5/, doi:10.1167/5.7.5. [PubMed] [Article]

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. [PubMed]

Makovski, T., & Jiang, Y. V. (2008). Indirect assessment of visual working memory for simple and complex objects. Memory & Cognition, 36, 1132–1143. [PubMed]

McCollough, A. W., Machizawa, M. G., & Vogel, E. K. (2007). Electrophysiological measures of maintaining representations in visual working memory. Cortex, 43, 77–94. [PubMed]

Most, S. B., Simons, D. J., Scholl, B. J., Jimenez, R., Clifford, E., & Chabris, C. F. (2001). How not to be seen: The contribution of similarity and selective ignoring to sustained inattentional blindness. Psychological Science, 12, 9–17. [PubMed]

Olson, I. R., & Jiang, Y. (2002). Is visual short-term memory object based? Rejection of the "Strong Object" hypothesis. Perception & Psychophysics, 64, 1055–1067. [PubMed] [Article]

O'Regan, J. K. (1992). Solving the "real" mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461–488. [PubMed]

Pessoa, L., Gutierrez, E., Bandettini, P. A., & Ungerleider, L. G. (2002). Neural correlates of visual working memory: fMRI amplitude predicts task performance. Neuron, 35, 975–987. [PubMed]

Scolari, M., Vogel, E. K., & Awh, E. (2008). Perceptual expertise enhances the resolution but not the number of representations in working memory. Psychonomic Bulletin & Review, 15, 215–222. [PubMed]


Shen, M., Li, J., Lang, X., Gao, T., Gao, Z., & Shui, R. (2007). Storage mechanism of objects in visual working memory (in Chinese). Acta Psychologica Sinica, 39, 761–767.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074. [PubMed]

Todd, J. J., & Marois, R. (2004). Capacity limit of visual short-term memory in human posterior parietal cortex. Nature, 428, 751–754. [PubMed]

Vogel, E. K., & Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. Nature, 428, 748–751. [PubMed]

Vogel, E. K., McCollough, A. W., & Machizawa, M. G. (2005). Neural measures reveal individual differences in controlling access to working memory. Nature, 438, 500–503. [PubMed]

Vogel, E. K., Woodman, G. F., & Luck, S. J. (2006). The time course of consolidation in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 32, 1436–1451. [PubMed]

Wheeler, M. E., & Treisman, A. M. (2002). Binding in short-term visual memory. Journal of Experimental Psychology: General, 131, 48–64. [PubMed]

Woodman, G. F., & Luck, S. J. (2003). Serial deployment of attention during visual search. Journal of Experimental Psychology: Human Perception and Performance, 29, 121–138. [PubMed]

Xu, Y. (2002). Limitations of object-based feature encoding in visual short-term memory. Journal of Experimental Psychology: Human Perception and Performance, 28, 458–468. [PubMed]

Xu, Y., & Chun, M. M. (2006). Dissociable neural mechanisms supporting visual short-term memory for objects. Nature, 440, 91–95. [PubMed]

Xu, Y., & Chun, M. M. (2007). Visual grouping in human parietal cortex. Proceedings of the National Academy of Sciences of the United States of America, 104, 18766–18771. [PubMed] [Article]

Xu, Y., & Chun, M. M. (2009). Selecting and perceiving multiple visual objects. Trends in Cognitive Sciences, 13, 167–174. [PubMed]
