GDR Vision annual forum, Marseilles
November 14, 2012
Institut de Neurosciences de La Timone (INT) Campus Santé La Timone,
27 Bd Jean Moulin, 13005 MARSEILLE
Local organizers: Laurent Madelain and Manuel Vidal
I. FORUM PROGRAM
II. ABSTRACTS
   KEYNOTE LECTURES
   TALKS
   POSTERS
III. LIST OF PARTICIPANTS
IV. VENUE
II. Abstracts
Keynote lectures
Karine Doré‐Mazars Laboratoire Vision Action Cognition, Institut de Psychologie ‐ Université Paris Descartes
What does the selectivity of saccadic adaptation tell us? Saccades are eye movements so fast that they cannot be corrected during their execution. Nevertheless, saccade accuracy remains stable throughout life, despite many potential sources of internal change (age or disease), thanks to adaptive mechanisms. A growing number of studies investigate the plasticity of the saccadic system by using external changes to induce saccadic adaptation. In the classical double-step paradigm (McLaughlin, 1967), the target steps during saccade execution, in the same or the opposite direction as the saccade, leading respectively to a saccade endpoint undershoot or overshoot relative to the final target position. Repeating the target step induces a decrease or an increase in saccade amplitude so that, gradually, the eye lands directly on the final target position. Most characteristics of saccadic adaptation have been highlighted by testing the transfer of a given adapted saccade to others. For instance, saccadic adaptation is known to be specific to a saccade type, as voluntary saccade adaptation does not transfer to reactive saccades. Such selectivity suggests that multiple neural sites underlie adaptation. Studies in children seem particularly relevant to this issue, as cerebral maturation differs between voluntary and reactive saccades. Among the numerous questions raised by the growing number of "unexpected" results on the context specificity of saccadic adaptation, I will focus on the importance of target selection and/or post-saccadic error. I will present a series of studies showing that spatial information about stimulus localization is not always necessary to induce adaptation: other information, such as stimulus size, can drive changes in saccade amplitude as well. Conversely, saccade inaccuracy in patients with occipital lesions does not impede saccadic adaptation.
Finally, I will open the discussion on the occurrence of saccadic adaptation without any change in the external world.
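The gradual amplitude change in the double-step paradigm can be illustrated with a toy error-driven learning rule (our own illustrative sketch, not the mechanism proposed in the talk; all parameter values are arbitrary): on each trial the post-saccadic error nudges the saccadic gain toward the stepped target.

```python
def adapt_saccadic_gain(n_trials, step=-0.3, lr=0.02, gain=1.0):
    """Toy double-step adaptation: the target sits at amplitude 1.0 and
    steps by `step` during the saccade; the eye lands at gain * 1.0.
    Each post-saccadic error nudges the gain by learning rate `lr`."""
    gains = [gain]
    for _ in range(n_trials):
        error = (1.0 + step) - gain  # final target position minus landing position
        gain += lr * error
        gains.append(gain)
    return gains

# After enough trials the gain approaches 1 + step, i.e. the eye lands
# directly on the stepped target (an amplitude decrease for a backward step).
```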
Kenneth Knoblauch Stem Cell and Brain Research Institute, Department of Integrative Neurosciences, U846, Bron
Psychophysical Methods for Estimating Supra-threshold Response Functions Psychophysical experiments typically analyze observer choices as a function of a chosen stimulus dimension in order to make inferences about the underlying sensory and/or decision processes. Modern psychophysical theory derives from Signal Detection Theory, in which the observer's performance depends on a noise-contaminated decision variable that, in association with a criterion, determines the rates of both successful classifications and errors. The largest body of psychophysical work, however, is based on discrimination of small (threshold) stimulus differences, yielding measures of perceptual strength that do not obviously (or easily) extrapolate to predict performance for large (supra-threshold) differences or for appearance. Some recent techniques do permit extensions to the supra-threshold domain. For example, Maximum Likelihood Difference Scaling (MLDS) is a psychophysical method and fitting procedure for scaling large stimulus differences based on paired comparisons of stimulus intervals. Maximum Likelihood Conjoint Measurement (MLCM) can also be shown to depend on comparing stimulus intervals, but across stimulus dimensions. The resulting scales have interval properties, i.e., equal differences along the scale are perceptually equal. The decision rules underlying discrimination, MLDS and MLCM are all based on the same equal-variance Gaussian model. I review studies that evaluate the coherence of the scale measures obtained by these three approaches, using as an example the quantification of the strength of the long-range color filling-in of the Watercolor effect. I argue that the results support a unified theory of psychophysics that extends from threshold to appearance.
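The equal-variance Gaussian decision rule shared by MLDS and MLCM can be sketched in a few lines (a minimal Python illustration; the reference implementation is the MLDS package for R, and the function names here are ours). For a quadruple of stimuli (a, b; c, d), the observer compares the two intervals on the estimated perceptual scale psi, with unit-variance Gaussian noise on the interval difference:

```python
import numpy as np
from scipy.stats import norm

def mlds_choice_prob(psi, quad):
    """P(observer judges interval (c, d) larger than (a, b)) under the
    equal-variance Gaussian model: the decision variable is the
    difference of scale intervals plus unit-variance noise."""
    a, b, c, d = quad
    delta = (psi[d] - psi[c]) - (psi[b] - psi[a])
    return norm.cdf(delta)

def mlds_neg_log_lik(psi, quads, responses):
    """Negative log-likelihood of binary responses (1 = chose (c, d));
    minimizing this over psi yields the interval scale."""
    p = np.array([mlds_choice_prob(psi, q) for q in quads])
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
```

In practice the scale is anchored (e.g. psi[0] = 0, psi[-1] free) and fitted with a generic optimizer such as scipy.optimize.minimize.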
Talks
Benoit Cottereau CerCo UMR5549 & Stanford University
The evolution of a disparity decision in human visual cortex Binocular retinal disparity is a fundamental cue for depth perception. The aim of this work was to identify the human visual areas responsible for determining the timing of behavioral responses in a disparity-discrimination task. Using a new fMRI-informed EEG source-imaging technique (Cottereau et al., in press), I characterized the activity within five visual areas: V1, V4, V3A, the Lateral Occipital Complex (LOC) and hMT+. Consistent with my previous studies (Cottereau et al., 2011; Cottereau et al., 2012), decision-related activity was found after stimulus onset within an extended cortical network that included extra-striate areas V4, V3A, hMT+ and the LOC. By using a novel response-locked analysis, I was able to determine the timing of these activities relative to the subject's behavioral response. Activity appeared first in area V4, almost 400 ms before the button press, and subsequently in V3A, the LOC and hMT+. All these responses remained significant up to 250 ms after the subject's response. Trial-by-trial correlation between behavioral reaction time and the early part of V4's evoked response demonstrated that this area is directly involved in disparity discrimination. Later activity in extra-striate cortex reflected post-decision feedback and had a distinctly different time course from that of motor cortex. By combining the high spatial resolution of fMRI-informed EEG source imaging with the ability to sort out neural activity occurring before, during and after the behavioral manifestation of the decision, this study is the first to assign distinct functional roles to the extra-striate visual areas involved in perceptual decisions based on disparity.
Andrei Gorea1, Jan del Oho Balaguer2 and Christopher Summerfield2 1Laboratoire de Psychologie de la Perception, UMR8158, Paris, 2Department of Experimental Psychology, Wadham College, Oxford University, UK
Computing an average over time Bloch's law stipulates that, within the temporal integration time for detection, tD (but also for brightness), stimulus duration T can be traded for intensity I (or contrast), so that keeping energy (i.e. I×T) constant keeps performance constant. What about integrating higher-order parameters such as orientation or shape? We show that the computation of the average shape (closer to "circle" or to "square") of a population of items presented simultaneously complies with Bloch's law and yields the same integration constant as for detection, tD. Moreover, the integration characteristics of such average processing depend neither on the number of items displayed (4-12) nor on whether the sample of items drawn from trial to trial is constrained (to always have a fixed mean and SD) or unconstrained. The main conclusion is that higher-order operations such as averaging occur at the detection stage with no further processing. Implications for the number of items simultaneously processed are also discussed.
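Bloch's law amounts to saying that, below the integration time tD, only the product I×T matters. A minimal sketch (our own illustration; the 50 ms value of tD is an arbitrary placeholder):

```python
def effective_energy(intensity, duration, t_d=0.050):
    """Bloch's law: below the integration time t_d (seconds), intensity
    and duration trade off, so only their product I*T matters; beyond
    t_d, extra duration no longer adds to the integrated energy."""
    return intensity * min(duration, t_d)

# Halving the duration while doubling the intensity leaves the
# effective energy, and hence the predicted performance, unchanged.
```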
Jean Lorenceau CRICM, UMR7225, Hôpital Pitié-Salpêtrière, Paris
Cursive writing with smooth pursuit eye movement Although it is generally believed that it is impossible to voluntarily generate pursuit eye movements without a driving target to track, I’ll describe an experimental set‐up with which
participants can sustain pursuit in chosen directions and at chosen speeds for long durations. The process underlying this capability requires enabling an action-perception loop in which eye movements elicit a perception of reverse-phi motion flowing in the very direction of the eyes, which in turn feeds the eye-movement system. Under this condition, oculomotor actions are no longer devoted to gathering information for vision; on the contrary, vision provides a moving substrate to generate eye movements. After a learning phase, needed to perceive, attend to, select and control the eye-induced reverse-phi motion, participants can voluntarily generate figures, digits, letters, words or drawings. In addition to offering a novel communication device, as well as a way of training eye-movement control, this paradigm opens new avenues for studying eye movements that I'll briefly discuss.
Sebastiaan Mathôt1,2, Jonathan Grainger1 and Jan Theeuwes2 1Laboratoire de Psychologie Cognitive, Marseille, 2VU University Amsterdam, Department of Cognitive Psychology
World‐centred, object‐centred, and eye‐centred visual attention It is generally believed that visual attention is an eye‐centred (retinotopic) phenomenon, i.e. that the focus of attention is encoded relative to the current position of gaze. In a series of experiments, we tested to what extent the focus of attention is rigidly linked to a retinotopic frame of reference. Our results suggest that the focus of attention is indeed fundamentally retinotopic, but can be updated to compensate for object movement and eye movements (i.e. self‐generated movement). We propose that this updating process allows us to interact effectively with a dynamic environment, despite a seemingly rigid, retinotopically organized visual system.
Sébastien Miellet Department of Psychology, University of Fribourg, Switzerland
Mapping qualitative and quantitative information use across cultures during face recognition Face recognition is not rooted in a universal eye movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer a global face information sampling strategy. Yet the precise qualitative (the diagnostic) and quantitative (the amount) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task, with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations, expanding dynamically at a rate of 1° every 25 ms at each fixation: the longer the fixation duration, the larger the aperture size. Identity-specific face information was only displayed within the Gaussian aperture; outside the aperture, average face information was displayed to facilitate saccade planning. Thus, the Expanding Spotlight simultaneously maps out the facial information span at each fixation location. Data obtained with the Expanding Spotlight technique show that Westerners extracted more information from the eye region, whereas Easterners extracted more from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial frequency decomposition and built from the fixation maps revealed that Westerners used local high-spatial-frequency information sampling, covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages in local or global eye movement strategies across cultures, by relying on differentially spatially filtered information.
Overall, our findings challenge the view of a unique putative process for face recognition.
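The aperture schedule described above can be written down directly (a sketch under our reading of the method, in which the aperture grows in discrete 25 ms steps; the function name and the discrete-step assumption are ours):

```python
def aperture_size_deg(fixation_ms, base_deg=2.0, rate_deg=1.0, step_ms=25):
    """Expanding Spotlight: a Gaussian aperture starts at base_deg and
    grows by rate_deg for every step_ms of fixation duration."""
    return base_deg + rate_deg * (fixation_ms // step_ms)
```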
Martin Paré Visual Information Processing Laboratory, Queen’s University, Kingston, Canada
Investigating visual cognition in the macaque monkey Our ability to adjust our behavior flexibly and respond appropriately to competing, and possibly conflicting, signals is enabled by a neural circuit supporting what is known as executive functions, which comprise the overlapping constructs of attention, working memory and inhibitory control. These are fundamental processes in human cognition, and they are impaired in a wide range of neurological and psychiatric disorders. There is no clear homologue of this neurocognitive network in non‐primate mammals, as it is fully defined only in catarrhine primates, thus making the macaque monkey an ideal animal model to study all aspects of executive functions. Recent advances suggest that the neural processes underlying executive functions share not only a common brain circuit but also neural mechanisms, which appear to rest on the voltage‐dependence and slow kinetics of the NMDA receptor activation. This talk will present findings from our investigations of these multiple facets of executive functions in macaque monkeys trained to perform tractable tasks probing attention, memory, and inhibitory control.
Mathieu Servant1,2, Anna Montagnini2 and Boris Burle1 1Laboratoire de Neurosciences Cognitives, UMR7291, Marseille, 2Institut de Neurosciences de la Timone, UMR7289, Marseille
Understanding human perceptual decision-making in conflicting situations The drift diffusion model (DDM) for two-choice decisions has proven to account for both behavioral and neurological data in a surprisingly wide range of simple choice paradigms (Ratcliff & McKoon, 2008; Gold & Shadlen, 2007). To make a decision, the model assumes that the brain accumulates noisy samples of sensory evidence over time until a criterial amount has been reached. Recently, it has been demonstrated that the DDM can also explain two well-known psychological laws that emerge in both detection and choice experiments when the strength of sensory evidence is manipulated. First, mean response times decrease as a power law with increasing stimulus intensity/discriminability (Piéron's law; Piéron, 1913; van Maanen et al., 2012). Second, the spread of a response-time distribution increases linearly with its mean (Wagenmakers' law; Wagenmakers & Brown, 2007). The aim of our study was to examine whether those laws still hold, and whether the DDM can be extended, under conflicting sensory influences. To this end, subjects performed Simon tasks (subjects must issue a right or a left response as a function of the color of lateralized stimuli that can appear ipsilateral or contralateral to the response) in which the color saturation was manipulated. Data show that Piéron's and Wagenmakers' laws hold for saturation, while conflict violates both of them. We propose a conceptual modification of the DDM that can account for our results, namely a time-varying rate of evidence accumulation.
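The accumulation process behind the DDM, and the resulting Piéron-like speed-up with stronger evidence, can be illustrated with a simple Euler-Maruyama simulation (a generic sketch of the standard constant-drift model, not the authors' time-varying extension; parameter values are arbitrary):

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, n_trials=2000, seed=0):
    """Simulate decision times of a two-boundary drift diffusion model:
    evidence x accumulates from 0 at rate `drift`, with Gaussian noise,
    until it crosses +threshold or -threshold."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    rt = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    t = 0.0
    while active.any():
        t += dt
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        crossed = active & (np.abs(x) >= threshold)
        rt[crossed] = t
        active &= ~crossed
    return rt

# Stronger evidence (a higher drift rate) yields faster mean decision
# times, qualitatively reproducing Pieron's law.
```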
Claudio Simoncini1, Laurent Perrinet1, Anna Montagnini1, Pascal Mamassian2 and Guillaume Masson1 1Institut de Neurosciences de la Timone, UMR7289, Marseille, 2LPP, UMR8158, Paris
Adaptive gain control explains dissociation between perception and action Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether this pooling is generic or adaptive; that is, dependent on the behavioral task. We used rich textured motion stimuli of varying bandwidths to decipher how the human visual motion system computes object speed in different behavioral contexts. We found that, although a simple visuomotor behavior such as short‐latency ocular following responses takes advantage of the full
distribution of motion signals, perceptual speed discrimination is impaired for stimuli with large bandwidths. Such opposite dependencies can be explained by an adaptive gain control mechanism in which the divisive normalization pool is adjusted to meet the different constraints of perception and action.
Sara Spotorno1,2, George Malcolm3 and Benjamin W. Tatler1 1School of Psychology, University of Dundee, 2Institut de Neurosciences de la Timone, 3Department of Psychology, The George Washington University
The use of scene context and object information during visual search in real-world images This study investigated how the visual system utilises context and object information during the different phases of a visual search task. The specificity of the template (the picture or the name of the target) and the plausibility of the target position in real-world scenes were manipulated independently. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and, vice versa, the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the degree of detail of the target template, and was shorter in the case of a picture cue. Moreover, the visual salience of the target affected search initiation and scene scanning, whereas the decision time to accept the target was independent of the object's low-level properties.
Lotje van der Linden1,2, Françoise Vitu1, Jan Theeuwes2 and Rob Ellis3 1Laboratoire Psychologie Cognitive, UMR7290, Marseille, 2Vrije Universiteit, Amsterdam, The Netherlands, 3Plymouth University, United Kingdom
Getting a grip on affordances, attention, and visual fields Seeing a graspable object automatically primes a corresponding grasping movement. In the current series of studies we investigated whether this so-called affordance effect also occurs when participants do not fixate on, but merely direct their attention covertly towards, an object. Because neuroimaging studies have demonstrated a left-hemispheric specialisation in automatic visuomotor priming, we furthermore examined whether these affordances vary as a function of the visual field in which the object is presented. To this end we presented objects at random (Experiment 1) and fixed (Experiment 2) locations in the periphery of the participants' visual field whilst monitoring fixation. We hypothesised that affordances would be more pronounced and/or occur earlier in time for objects presented in the right visual field (i.e., primarily processed by the left hemisphere). As predicted, we found that directing attention covertly towards an object did cause affordance effects: participants were faster and more accurate at categorising an object with a grasp that was compatible, as compared to incompatible, with the object's size. In contrast to our predictions, however, neither the strength nor the time course of these affordances differed between visual fields. These surprising results are discussed in light of previous findings.
Stéphane Viollet Institut des Sciences du Mouvement, Luminy, Marseille
Biomimetic gaze control strategies for robotics Flying insects keep their visual system horizontally aligned suggesting that gaze stabilization is a crucial first step in flight control. Flying insects and birds are also able to navigate swiftly in unknown environments with very few computational resources. They are not guided via radio links with any ground stations and perform all the required calculations onboard. The ability to stabilize the gaze is the key to an efficient visual guidance system, as it reduces the computational burden associated with visuomotor processing. Examples of ethological
experiments related to robotic demonstrators will be shown to illustrate the key role of gaze stabilization in autonomous navigation.
Mark Wexler and Pascal Mamassian Laboratoire de Psychologie de la Perception, UMR8158, Paris
Large-sample psychophysics on the internet We have found that individual subjects have robust idiosyncratic biases in perceiving 3D structure (depth order, plane orientation) from motion. Wishing to obtain population measures of these biases (to learn, for example, whether their distributions are uni- or multi-modal, and whether the different biases are correlated), we decided to perform an internet experiment in order to obtain a large sample of subjects. This talk will be as much about the technical challenges of programming spatially and temporally precise visual stimuli for display in internet browsers (using HTML5), and the quality of the data that can be expected, as about the results we have obtained.
Posters
Numa Basilio, Antoine Morice, Geoffrey Marti and Gilles Montagne Institute of Movement Sciences Etienne‐Jules MAREY, UMR7287, Marseille
Visually guided overtaking behaviour understood in relation to the maximal acceleration of the vehicle: an affordance-based approach Fajen [1] suggests that an affordance-based approach can explain how successful goal-directed behaviours are selected and regulated in accordance with our own action capabilities. We test this framework in a virtual driving simulator by investigating the influence of the action boundaries of driven cars on overtaking. A previous experiment [2] reported that drivers use an informational variable based on the ratio of the Minimum Satisfying Velocity (MSV) to the maximum velocity of the driven car (Vmax) in order to perform safe overtaking. However, perceiving overtaking possibilities through the MSV/Vmax ratio cannot account for the influence of more relevant kinematic features of automobiles, such as acceleration. We therefore challenged our previous formalization and assumed, in the present experiment, that a new perceptual variable, defined as the Minimum Satisfying Acceleration (MSA) divided by the maximum acceleration of the driven vehicle (Amax), would accurately inform drivers about overtaking possibilities. Two groups of drivers were required to perform overtaking manoeuvres, when deemed possible, by driving virtual cars with one of two Amax values (2 m/s² or 3.5 m/s²). Twenty-five overtaking situations were set up by combining 5 MSA and 5 MSV values. Our results first show that overtaking frequency significantly decreases with increasing MSA and MSV values, and becomes null for both groups of participants when extrinsic task constraints exceed the group's action capabilities. Second, we show, as in earlier affordance experiments [3], [4], that the significant between-group differences on an extrinsic scale (i.e., expressed as a function of MSA and MSV values) vanish when an intrinsic scale is applied (i.e., expressed as a function of the MSA/Amax and MSV/Vmax ratios). These results replicate our previous findings and confirm that the MSA/Amax ratio is a good candidate to account for visually guided overtaking manoeuvres. Future analyses are designed to investigate how the MSA/Amax ratio can account for the regulation of overtaking. [1] B. R. Fajen, "Affordance-based control of visually guided action", Ecological Psychology, vol. 19, no. 4, pp. 383-410, 2007.
[2] A. H. P. Morice, G. J. Diaz, B. R. Fajen, and G. Montagne, "Can affordance theory account for overtaking behaviour?", presented at Progress in Motor Control, Marseille, France, 2009. [3] W. H. Warren Jr, "Perceiving affordances: visual guidance of stair climbing", J Exp Psychol Hum Percept Perform, vol. 10, no. 5, pp. 683-703, Oct. 1984. [4] W. H. Warren Jr and S. Whang, "Visual guidance of walking through apertures: body-scaled information for affordances", J Exp Psychol Hum Percept Perform, vol. 13, no. 3, pp. 371-383, Aug. 1987.
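The intrinsic-scale criterion proposed above reduces to a unit-free ratio test (a minimal sketch of the idea; the function name is ours):

```python
def overtaking_afforded(msa, a_max):
    """Affordance-based criterion: overtaking is possible when the
    minimum satisfying acceleration (MSA) does not exceed the driven
    vehicle's maximum acceleration (Amax), i.e. MSA / Amax <= 1."""
    return msa / a_max <= 1.0
```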
Maria Bermudez1, Dora Courbon2, Frédéric Barthelemy1, Guillaume Masson1 and Ivo Vanzetta1 1Institut de Neurosciences de la Timone, UMR7289, Marseille, 2Université Paris 5, Paris
Effect of temporal frequency, color and contrast on behavioral performance and neuronal population activity in V4 of the behaving macaque Based on human psychophysics, Gegenfurtner and Hawken (1995, 1996) suggested that motion and color information are processed along two channels in the primate visual system, both sensitive to color and motion: one specialized in the treatment of rapid motion and the other in the treatment of slow motion, in particular of chromatic, isoluminant stimuli. Whereas the known properties of MT neurons could account for the first channel, it has been proposed that visual area V4 could provide the neuronal substrate for the second one (Gegenfurtner & Hawken, 1996). We have previously provided support for this hypothesis, reporting that neuronal population responses in monkey V4 to isoluminant, slowly moving stimuli are stronger than to fast, luminance-based ones (SFN 2009). However, a shortcoming of that study was the absence of information on the behavioral performance of the monkey, raising the question of how relevant the neuronal responses observed in V4 are for the monkey's perception. More generally, it remains to be ascertained whether the above conclusions drawn from human psychophysical data also hold true for monkeys. To address these questions, we trained a monkey on a direction-of-movement discrimination task: it had to report the perceived movement of a grating that could drift left- or rightwards by making a saccade to one of two targets presented after the stimulus. The grating could be isoluminant or luminance-based, and could drift slowly or fast (temporal frequency of 1 or 8 Hz). To obtain a psychometric contrast-response curve for each of the 4 grating categories, all stimuli were presented at several contrast levels and, at each contrast level, the monkey's performance was calculated as the ratio of correct to total trials.
The psychometric curve obtained with the 8 Hz luminance stimuli differed from those obtained with all the other stimulus categories, and so did the curve obtained with the 1 Hz isoluminant stimuli, indicating that a different neuronal mechanism dominated in each of the two cases, as has been suggested for humans. The curves obtained with the other two sets of stimuli were statistically indistinguishable, indicating a mixed contribution of the two underlying neuronal mechanisms to visual motion perception. As a control, the neuronal population activity recorded with optical imaging in the same monkey using the same stimuli confirmed our previously reported results (SFN 2009). In contrast to humans, in the macaque the performance for the 1 Hz isoluminant grating was weakest. This difference might be explained by a different red-green cone ratio in monkeys and humans, resulting in a different point of equiluminance.
Soazig Casteau and Françoise Vitu Laboratoire de Psychologie Cognitive, UMR6146, Marseille
The Global Effect: Dr Jekyll & M. Hide The Global Effect (or GE) is the tendency to move the eyes towards the centre of gravity of the global peripheral configuration formed by two (or more) spatially‐proximal stimuli. It is
assumed to reflect low-level spatial integration mechanisms at the level of the Superior Colliculus (SC), a midbrain structure controlling the generation of saccadic eye movements. However, in contrast with this widely accepted hypothesis, some authors have proposed that the GE reflects intentional visual strategies. To contrast these two hypotheses, we tested the properties of the GE across the vertical and horizontal meridians. On experimental trials, the saccade-target object appeared simultaneously with a distractor stimulus at a mirror-symmetric location across one of the meridians, while on control trials it was displayed with no distractor. Both the eccentricity of the stimuli (2 & 4°) and the angular separation between them (15 & 25°) were manipulated, as well as the uncertainty of target direction (maximal vs. minimal). Results showed that two stimuli presented at mirror locations induced a GE, but more strongly when the uncertainty of target direction was maximal. The modest GE observed under minimal uncertainty increased with the angular separation between distractor and target, while the GE in maximal-uncertainty conditions remained unaffected by angular separation and was associated with increased variability of landing sites. Thus, there might be two different types of GE: one deriving from distributed spatial coding in the SC and another reflecting intentional behaviour.
Céline Cavézian1, Julien Barra1, Karine Doré-Mazars2, Cécile Issard1 and Boris New1,2 1Laboratoire Vision, Action, Cognition, Université Paris 5, 2Institut Universitaire de France
A new perceptual bias: the letter and word magnification effect The "word superiority effect" (Reicher, 1969) refers to the finding that letters are more easily identified when presented in a word than in a nonword or in isolation. This perceptual bias has been explained through the interactive activation model. In this study we investigated whether overactivation of features could lead to another perceptual bias, wherein letters are perceived as taller than pseudoletters, or words as taller than pseudowords. To test this hypothesis, we used a size comparison task in which participants had to decide whether two stimuli, one a letter (or a word) and the other a pseudoletter (or a pseudoword), were of identical or different height. Our results show that subjects perceive letters and words as taller than pseudoletters or pseudowords of physically identical height.
Romain Chaumillon and Alain Guillaume Laboratory of Neurosciences of Cognition, UMR7291, Marseille
The Poffenberger Paradigm revisited: behavioural and electrophysiological data in favour of an impact of ocular dominance The dominant eye is the one we unconsciously choose when we have to align a target in peripersonal space with a more distant point. Previous work showed that monocular visual stimulation of this dominant eye (DE) leads to a faster and greater activation in the ipsilateral hemisphere (e.g. Shima et al. 2010; Neuroreport 21(12), 817-21). Here we tested, first through behavioral measures, whether this special relationship could have consequences for visuo-motor processes. We used a simple Poffenberger Paradigm in which subjects had to press a central button as quickly as possible after the appearance of lateralized targets. Right- and left-handers with right or left DE participated in the study. In right-handers, we observed shorter reaction times (RT) when the stimuli were in the visual field contralateral to the DE compared to RT for stimuli in the ipsilateral visual field. This impact of the DE in the Poffenberger Paradigm may explain the rather large variability often reported in the literature (see for example Iacoboni et al. 2000). In left-handers, we observed a different and weaker influence of the DE. Second, we are currently assessing the influence of the DE on interhemispheric transfer time (IHTT) through electrophysiological measures. Still in the Poffenberger paradigm, we use EEG recordings to precisely evaluate the IHTT (e.g. Rugg et al. 1984; Neuropsychologia 22(2), 215-25). Preliminary analyses of these data in right-handers are in favour of higher IHTT values in subjects with left DE compared to those with right DE. In
conclusion, all these data show an impact of the DE both on hemifield advantage and on IHTT. This suggests that the influence of the DE has to be considered in visuo‐motor and interhemispheric transfer studies.
Shahrbanoo Talebzadeh (Hamel), Dominique Houzet, Nathalie Guyader and Denis Pellerin GIPSA-lab, UMR5216, Grenoble
Influence of color on eye movements during dynamic scene exploration Does color, independently of luminance, influence eye movements? Contrast of color and contrast of luminance are two combined features that deploy human visual attention while exploring visual scenes. Yet there are few studies that investigate the contribution of color features to visual attention independently of luminance features. Here we studied the impact of color on eye movements during dynamic scene exploration. We gathered a data set of 35 subjects' eye movements when viewing video clips in two conditions: color and grayscale. We found that the influence of color on eye positions depends on the number and location of objects in the visual scene and varies across observation time. Furthermore, we found several differences between the two conditions for saccade amplitudes and fixation durations. These results will be presented in detail in the poster.
Frédéric Isel, Xavier Aparicio, Karin Heidlmayr, Christelle Lemoine and Karine Doré‐Mazars Institut de Psychologie, Université Paris 5
Does multilingual code switching impact eye movement control? Evidence from pro‐ and anti‐saccade tasks. It has been shown that bilinguals perform better in cognitive tasks involving inhibitory control (Bialystok, Craik & Luk, 2008). These greater inhibitory capacities are assumed to be due to the regular use of code switching. Here we asked whether top‐down inhibition trained by a cognitive activity like code switching impacts the realization of motor tasks involving oculomotor control, such as the anti‐saccade task (AS; Munoz & Everling, 2004). Saccadic responses were measured in 11 monolinguals and 9 late bilinguals. Separate prosaccade (PS, i.e. automatic response toward the target) and AS (requiring inhibition of the automatic PS) sessions, as well as one mixed session (i.e. switching between PS and AS responses), were used. Blocked sessions were repeated twice, before and after the mixed session. Our preliminary results show that, as expected, PS latencies were shorter than AS latencies in both groups. Moreover, a training effect was found, with shorter latencies in the second blocked session than in the first one, to a lesser extent in bilinguals. We also found that bilinguals were globally faster than monolinguals. Critically, the switch condition slowed only the PS latencies of monolinguals, while such an effect did not seem to occur in bilinguals. Taken together, our data suggest that top‐down inhibition trained by a cognitive activity could impact the realization of motor tasks.
Mina Khoei, Laurent Perrinet and Guillaume Masson Institut de Neurosciences de la Timone, UMR7289, Marseille
The role of prediction in motion extrapolation. The visual system uses prior information on the temporal coherency of motion, which posits that moving objects in nature most probably travel along 'smooth trajectories' and only rarely change their route abruptly. Consequently, the visual system may take advantage of this regularity to predict future sensory input consistent with input already observed. Such prediction may ease sensory computations in two ways: (1) disambiguation of local motion information and integration toward a global scale; (2) signalling an ongoing, possibly occluded stimulus and extrapolation of its most likely trajectory. This hypothesis has been postulated for a long time, but only recently has it been quantitatively investigated (Yuille et al., 1989; Burgi et al., 2001). Extending these models, we have implemented a realistic probabilistic framework using particle filtering (Perrinet & Masson, 2012, Neural Computation). We have shown that it offers a sufficient solution to the aperture problem. Here, we focus on extrapolation of motion information, a mechanism by which the visual system infers the current position of visual motion from its recent history. We investigated the response dynamics of the model to the transient disappearance, or unexpected stopping, of a point‐like stimulus moving along a smooth trajectory. Moreover, the signal‐to‐noise ratio of the sensory evidence was manipulated using different levels of stimulus contrast. We imposed a short period of blanking on the stimulus and manipulated the timing and duration of the blank. During the blank and right after the reappearance of the dot stimulus, we quantified the relative contributions of prediction and stimulus to tracking. Before the blank, we found that accumulation of information along the trajectory leads to the emergence of tracking above a threshold. We then showed that the strength of motion extrapolation depends on the tracking state of the system when the blank occurs: what matters most is whether the disappearance occurs before or after the emergence of tracking, a result in agreement with physiological evidence.
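The blanking behaviour described above can be illustrated with a minimal particle‐filter sketch: a smooth‐trajectory motion prior keeps the position estimate moving when observations are withheld. This is a toy stand‐in for the full model of Perrinet & Masson (2012), not its implementation; all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                            # number of particles
sigma_pos, sigma_vel = 0.05, 0.02   # motion-prior noise (smooth-trajectory prior)
sigma_obs = 0.1                     # sensory noise; larger = lower stimulus contrast

# particle state: position and velocity, one spatial dimension for simplicity
pos = rng.normal(0.0, 1.0, N)
vel = rng.normal(0.0, 0.5, N)

def step(observation):
    """One predict/update cycle; observation=None simulates a blank."""
    global pos, vel
    # predict: diffuse along the smooth-trajectory (near-constant-velocity) prior
    vel = vel + rng.normal(0.0, sigma_vel, N)
    pos = pos + vel + rng.normal(0.0, sigma_pos, N)
    if observation is not None:
        # update: weight particles by the likelihood of the noisy observation
        w = np.exp(-0.5 * ((observation - pos) / sigma_obs) ** 2)
        w = w / w.sum()
        idx = rng.choice(N, N, p=w)           # resample
        pos, vel = pos[idx].copy(), vel[idx].copy()
    return pos.mean()                          # point estimate of target position

# target moves at a constant 0.1/step; it is blanked on steps 30-39
true_pos, ests = 0.0, []
for t in range(60):
    true_pos += 0.1
    obs = None if 30 <= t < 40 else true_pos + rng.normal(0.0, sigma_obs)
    ests.append(step(obs))
# during the blank the estimate keeps moving along the extrapolated trajectory,
# and tracking re-converges once the dot reappears
```

During the blank, only the prediction step runs, so the velocity accumulated from the trajectory history carries the estimate forward, which is the extrapolation effect the abstract probes.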
Nicolas Lebar, Laurence Mouchnino, Alain Guillaume and Jean Blouin Laboratoire de Neurosciences Cognitives, UMR7291, Marseille
Cortical modulation of visual input in associative visual areas during visuomotor adaptation. The literature shows evidence of cortical modulation of sensory inputs during motor learning under conflicting visuo‐proprioceptive information. Bernier et al. (Cerebral Cortex, 2009) found that proprioceptive cortical input was attenuated when learning to draw a figure while controlling the movement through a mirror. This cortical attenuation is thought to reduce the conflict between proprioceptive and visual information and to enhance motor performance. After practice, both the performance and the somatosensory input returned to baseline levels. Here, we hypothesised that in such a visuo‐proprioceptive conflict task, the reduction of somatosensory input is accompanied by a facilitation of the visual input to increase motor performance. To test this hypothesis, we used a mirror‐drawing task similar to that of Bernier et al. (2009). We used electroencephalography to record the cortical potentials (VEPs; amplitude of the P1‐N1 and N1‐P2 components) evoked by flashes of an LED (1.2 Hz) attached near the tip of the hand‐held stylus while subjects (6 women and 6 men) were learning the mirror‐drawing task. The hand‐pen configuration was such that the flashes always appeared in the right visual field. Drawing performance was also measured. A subgroup analysis based on subjects' performance showed that the 6 best performers had significantly greater VEP amplitudes than the other subjects in associative visual areas, i.e. extrastriate and inferior parietal cortices. These areas are known to play key roles in detecting incongruent stimuli and in visuomotor processes, respectively. Importantly, we did not find modulations of the P1‐N1 and N1‐P2 components in primary visual areas. The results suggest that the central nervous system can modulate visual input at the cortical level according to the adaptive context. They also show a link between the electrophysiological signals and the subjects' performance, suggesting the establishment of a natural neural strategy to adapt to the task demand.
Christelle Lemoine1, Thérèse Collins2, Jacqueline Fagard2 and Karine Doré‐Mazars1 1Laboratoire Vision Action Cognition, 2Laboratoire Psychologie de la Perception
Saccadic adaptation in 10‐40 month‐old infants. The accuracy of saccadic eye movements is maintained by a mechanism known as saccadic adaptation. Saccadic adaptation adjusts the amplitude of saccades to correct for previous targeting errors, and is responsible for the recovery of accurate targeting behavior in patients with extraocular deficits. In the laboratory, adaptation can be observed in healthy adults by displacing the target during the saccade towards it; for example, backward target displacements evoke amplitude reductions. We studied how this adult plasticity develops with age. We developed a novel method for measuring adaptation in 10‐40 month‐old infants, in which a single animated target was presented at successive locations 10° apart on a screen. When the infants made a saccade to the target, the target stepped back (i.e. in the direction opposite to the saccade) by 2° or 3°. Although saccadic undershoot was greater in our infant sample than is classically observed in adults, our procedure evoked a reduction of amplitude. In a control session, infants performed the same number of saccades but the target was never displaced. Interestingly, amplitude tended to increase in this condition, showing that the amplitude reduction in response to the target displacement was not the result of fatigue or disinterest. These preliminary results suggest that the mechanisms responsible for saccadic plasticity may be functional in young infants.
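For intuition, this kind of amplitude reduction is often modeled as an error‐driven update of saccadic gain, where each post‐saccadic error corrects a small fraction of the next saccade. The sketch below is a textbook‐style toy model, not the authors' analysis; the learning rate, noise level and starting gain are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

target = 10.0      # initial target eccentricity (deg)
back_step = 3.0    # intra-saccadic backward target step (deg)
k = 0.05           # learning rate: fraction of the normalized error per trial
gain = 0.9         # starting saccadic gain (infants typically undershoot)

gains = []
for trial in range(100):
    amplitude = gain * target + rng.normal(0.0, 0.3)   # saccade with motor noise
    final_target = target - back_step                  # target stepped backwards
    error = final_target - amplitude                   # post-saccadic error (deg)
    gain += k * error / target                         # error-driven gain update
    gains.append(gain)
# the gain drifts from 0.9 toward (target - back_step) / target = 0.7
```

The fixed point is the gain at which the eye lands on the stepped‐back target, so repeated backward steps progressively shrink saccade amplitude, mirroring the adaptation effect measured in the infants.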
Sophie Lemonnier1,2, Roland Brémond2 and T. Baccino1 1Université Paris 8, LUTIN, 2Université Paris Est, IFSTTAR, LEPSiS
To what extent does visual search depend on the task? The role of attention. The task guides attention, while attention guides visual search. However, a complex goal includes several sub‐tasks prioritized within a hierarchical organization. Considering a complex, real‐life task, is it possible from gaze patterns to distinguish two complex situations of the same activity, and to identify the relative weight of these sub‐tasks? We addressed this question with a real‐life complex situation: driving a car. More specifically, we consider the driving task during the anticipation of a crossroads, and on straight roads. In a series of two experiments, in a driving simulator and on the road, we manipulated two independent variables: the priority rule (given by a road sign) and the level of traffic on the road the driver wants to cross. We measured behavioral variables during the experiment, both from the car (acceleration, braking, steering angle) and from the driver (eye tracking). Both experiments have pros and cons. In the driving simulator experiment, the behavior is less natural compared to driving on the road, and ecological validity issues emerge. On the other hand, the experimenter has full control of the variables (vs. changing weather on the road, visual masking, local differences between crossroads, etc.), and all subjects complete the experiment under the same controlled conditions. Our work is still in progress. Data acquisition is completed (with 36 participants) in the driving simulation experiment conducted at the IFSTTAR, LEPSIS, and the data analysis is in progress. In the field study, data acquisition is still in progress at the LRPC St Brieuc (25 participants to date). To explain our data, we plan to use a quantitative model derived from Wickens' SEEV model, which predicts the relative fixation time in different areas of interest.
Laurent Madelain1,2, Anna Montagnini2 and Guillaume Masson2 1Psychology, Ureca ‐ Univ. Lille 3, Villeneuve D'Ascq, 2Institut de Neurosciences de la Timone, UMR7289, Marseille
Tracking the footstep illusion: Effects of transient contrast‐induced perceived‐velocity perturbations on smooth pursuit. Low‐contrast stimuli consistently appear slower than the same targets presented at higher contrast, such that smooth pursuit eye velocity gain increases linearly with increasing contrast. Here we ask whether the footstep illusion may be used to probe the effects of transient perceived‐velocity perturbations on smooth pursuit. Three subjects tracked a bright yellow horizontal bar translating over a black‐and‐white grating, so that the target passed over a black and a white stripe every 500 ms. Subjects tracked the horizontal motion peri‐foveally 1 deg below the stimulus. On control trials, the luminance of the yellow translating bar was such that its contrast was constant with respect to both the white and black stripes. We found that in the high‐luminance trials, pursuit velocity gain oscillated at a frequency locked to the grating's frequency. On the other hand, when the luminance of the yellow bar was low, eye velocity oscillations were reduced. In a second experiment, the paradigm was identical except that at a variable point in the trial, the luminance was high for 500 ms (1 grating cycle) before returning to a low luminance level. Again we found that pursuit gain was approximately constant when the target's contrast was identical across the black and white stripes, but when the luminance transiently changed we observed a transient oscillation in velocity gain. These results extend previous work on the effects of luminance contrast on smooth pursuit and provide a new tool to selectively and transiently perturb pursuit eye velocity in the absence of transient perturbations in eye position.
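The logic of the control condition can be illustrated numerically: if perceived speed, and hence pursuit gain, grows with local Michelson contrast, a bright bar yields a gain that alternates between the black and white stripes, whereas a bar at the geometric‐mean luminance has equal contrast against both stripes. In the toy sketch below, the contrast‐to‐gain mapping and all luminance values are illustrative assumptions, not the study's measurements.

```python
import math

def michelson(l_bar, l_bg):
    """Michelson contrast between bar and background luminance."""
    return abs(l_bar - l_bg) / (l_bar + l_bg)

def pursuit_gain(c, g_min=0.6, g_max=1.0, c_half=0.1):
    """Assumed saturating contrast-response function for pursuit gain."""
    return g_min + (g_max - g_min) * c / (c + c_half)

l_white, l_black = 1.0, 0.05

# bright bar: low contrast over white stripes, high over black -> gain oscillates
g_bright = [pursuit_gain(michelson(0.9, bg)) for bg in (l_white, l_black)]

# control bar at the geometric-mean luminance: equal Michelson contrast against
# both stripes, so the predicted pursuit gain stays constant across the grating
l_eq = math.sqrt(l_white * l_black)
g_control = [pursuit_gain(michelson(l_eq, bg)) for bg in (l_white, l_black)]
```

The geometric mean equalizes Michelson contrast exactly (both reduce to (1 − √b)/(1 + √b) for white luminance 1 and black luminance b), which is one way a constant‐contrast control bar could be constructed.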
Julie Quinet and Laurent Goffart Institut de Neurosciences de la Timone, UMR7289, Marseille
Saccades toward a transient moving visual target in the inexperienced rhesus monkey. It has been proposed that visual motion signals are used by the saccadic oculomotor system to extrapolate the future position of a moving target and to generate a saccade that brings the target image onto the fovea at the time of saccade landing. However, in most neurophysiological studies, these interceptive saccades were recorded in animals that had already been trained to track a target whose motion velocity was already known. In other words, the estimated future target position corresponded to a position that was expected because of the animal's training. To determine whether extrapolating the future target position was due to prior knowledge of its motion acquired during previous training sessions, or to the target motion that actually preceded saccade onset, we tested in the inexperienced rhesus monkey whether transient target motions (as short as 50 or 100 ms) were sufficient to influence the landing position of saccades toward such a fleeting target. More specifically, we tested saccades launched from a central target toward a peripheral one that was located on the vertical meridian (16 deg above or below the central target) and that moved horizontally to the left or to the right at a constant velocity (20°/s) (no gap or a 200 ms gap separated the motion onset of the peripheral target from the offset of the central target), for durations ranging from 50 to 800 ms. Different motion durations were tested on different days, with short durations tested during the first day and longer ones during the following sessions. The results show that, whether the target moved for 50 or 100 ms, the mean horizontal final eye position of saccades was, in only two visual quadrants, deviated beyond the mean final position of comparable saccades aimed at a static target at the same location. Moreover, when the mean horizontal final eye position was compared between saccades toward a target moving for 50 versus 100 ms, the eye landed on more eccentric positions in three of the four quadrants when the target moved for 100 ms. Finally, the horizontal position of saccade endpoints increased with the time elapsed from target onset to saccade end (response time). A correlation was indeed found between the horizontal final eye positions and the response times, for almost all target motion durations (from 50 ms to 800 ms) and for all visual quadrants. This dependency was not observed in saccades aimed at a static target. Further experiments are under way to test whether this dependency depends on target velocity or whether it reflects the last target position used to guide the saccade trajectory.
Sébastien Roux1, Frédéric Matonti1,2, Virginie Donnadieu1, Louis Hoffart1,2, Sadok Gharbi3, Catherine Pudda3, Fabien Sauter4, Vincent Agache3, Regis Guillemaud3 and Frédéric Chavane1 1INT CNRS and Aix‐Marseille University, Marseille, 2Ophthalmology Dept. Timone Hospital, Marseille, 3CEA‐Leti, Grenoble, 4EA‐Minatec, Grenoble
Cortical readout of prosthetic vision: a parametric study. Stimulating the retinal network is being seriously considered as a tool to restore visual function in a large number of human retinal pathologies (ARMD and RP). To test the functional impact of retinal implants, we made quantitative comparisons between visually and electrically evoked activities at the population level in the primary visual cortex. As implants, we used sub‐retinal matrices of 1 mm diameter supporting 9 to 17 electrodes (CEA‐LETI, Grenoble). Using visual stimulation, we characterized the retinotopic organization, the retino‐cortical magnification factor, the stimulus intensity response function and the dependence on stimulus size. These benchmarks were then used to interpret the electrically evoked cortical activities induced by the retinal implants in terms of their hypothetical visual counterparts. By manipulating the size of our electrical stimulation (single electrode vs. whole implant), we generated activations of increasing size in the cortex at the expected retinotopic position. However, these cortical activations were much larger than predicted by the size of the implants, suggesting a non‐linear recruitment of the retinal network with potential diffusion of the electrical currents. We observed that manipulating a single parameter of the electrical patterns, such as intensity, leads to modulations of cortical activation along several dimensions and could affect the size as well as the brightness of the percept, as recently shown in human patients. To overcome these pitfalls, new electrical stimulation paradigms were designed and tested that focalize cortical activations to match visually evoked activity. These observations are important for the development of functional retinal implants and optimal features of electrical patterns. We think that providing embedded functional tests of the implant is a necessary step for progress in this field.
Manuel Vidal1,2 and Victor Barrès1,3 1Laboratoire de Physiologie de la Perception et de l’Action, UMR7152, Paris, 2Institut de Neurosciences de la Timone, UMR7289, Marseille, 3Neuroscience department, University of Southern California, USA
Hearing lips and (not) seeing voices: Audiovisual integration with suppressed percepts. In binocular rivalry, sensory input remains the same yet subjective experience fluctuates inexorably between two mutually exclusive representations. We investigated whether a suppressed viseme can still produce the McGurk effect with an auditory input, which would indicate that unconscious lip motion sometimes modulates the perceived phoneme through the well‐known multisensory integration mechanism. We used speech stimuli because they involve robust audiovisual interactions at several cortical levels. The phoneme perceived when presenting a synchronous voice saying /aba/ together with rivaling faces saying /aba/ and /aga/ was recorded for seven McGurk‐sensitive participants. We found that when the dominant percept was seen with the non‐dominant eye, in about 20% of the trials the audiovisual outcome resulted from an integration with the suppressed percept. This integration could either produce or cancel the McGurk effect expected from the actually seen viseme. This observation raises serious questions in the fields of speech perception and multisensory binding, suggesting that feature binding might not be a prerequisite for perceptual awareness. Further experiments are being conducted to determine whether the information binding failed within the same modality (color and lip motion) or between the two modalities (lip motion and voice).
III. List of participants
Last name, First name · Email · Institution · City
Basilio, Numa · numa.basilio@univ‐amu.fr · ISM · Marseille
Bermudez, Maria · maria.bermudez@univ‐amu.fr · INT · Marseille
Blouin, Jean · jean.blouin@univ‐amu.fr · LNC · Marseille
Brémond, Roland · [email protected] · LUTIN/IFSTTAR, Paris 8 · Paris
Casteau, Soazig · soazig.casteau@univ‐provence.fr · LPC · Marseille
Castet, Éric · eric.castet@univ‐amu.fr · LPC · Marseille
Cavezian, Céline · [email protected] · LVAC, Paris 5 · Paris
Chaumillon, Romain · romain.chaumillon@univ‐amu.fr · LNC · Marseille
Chavane, Frédéric · frederic.chavane@univ‐amu.fr · INT · Marseille
Cottereau, Benoit · [email protected]‐tlse.fr · CerCo · Toulouse
Danion, Frédéric · frederic.danion@univ‐amu.f · INT · Marseille
Doré‐Mazars, Karine · [email protected] · LPNC, Paris 5 · Paris
Gorea, Andreï · [email protected] · LPP · Paris
Guillaume, Alain · alain.guillaume@univ‐amu.fr · LNC · Marseille
Hamel, Shahrbanoo · shahrbanoo.hamel@gipsa‐lab.grenoble‐inp.fr · GIPSA · Grenoble
Isel, Frédéric · [email protected] · Institut Psycho, Paris 5 · Paris
Khoei, Mina · mina.aliakbari‐khoei@univ‐amu.fr · INT · Marseille
Knoblauch, Kenneth · [email protected] · SBRI · Lyon
Lebar, Nicolas · [email protected] · LNC · Marseille
Lefumat, Hannah · hannah.lefumat@univ‐amu.fr · ISM · Marseille
Lemoine, Christelle · [email protected] · LVAC, Paris 5 · Paris
Lemonnier, Sophie · [email protected] · LUTIN/IFSTTAR, Paris 8 · Paris
Lorenceau, Jean · [email protected] · CRICM · Paris
Madelain, Laurent · [email protected] · Lille 3/INT · Marseille
Mamassian, Pascal · [email protected] · LPP · Paris
Massendari, Delphine · [email protected] · LPC · Marseille
Masson, Guillaume · guillaume.masson@univ‐amu.fr · INT · Marseille
Mathôt, Sebastiaan · [email protected] · LPC · Marseille
Miellet, Sébastien · [email protected] · University of Fribourg · Fribourg, CH
Montagnini, Anna · anna.montagnini@univ‐amu.fr · INT · Marseille
Paré, Martin · [email protected] · Queen's University · Kingston, CA
Pellerin, Denis · denis.pellerin@gipsa‐lab.grenoble‐inp.fr · GIPSA · Grenoble
Perrinet, Laurent · [email protected] · INT · Marseille
Quinet, Julie · julie.quinet@univ‐amu.fr · INT · Marseille
Roux, Sébastien · [email protected] · INT · Marseille
Sarlegna, Fabrice · fabrice.sarlegna@univ‐amu.fr · ISM · Marseille
Scotto, Cécile · cecile.scotto‐di‐cesare@univ‐amu.fr · ISM · Marseille
Servant, Mathieu · [email protected] · LNC · Marseille
Simoncini, Claudio · claudio.simoncini@univ‐amu.fr · INT · Marseille
Spotorno, Sara · [email protected] · University of Dundee · Dundee, UK
van der Linden, Lotje · [email protected] · LPC · Marseille
Vidal, Manuel · manuel.vidal@univ‐amu.fr · INT · Marseille
Viollet, Stéphane · stephane.viollet@univ‐amu.fr · ISM · Marseille
Wexler, Mark · [email protected] · LPP · Paris
Yao‐N'Dré, Marina · [email protected] · LPC · Marseille
IV. Venue
The GDR Vision will take place on the Campus Santé La Timone, in Marseille, which is located at the following address:
Campus Santé La Timone Faculté de Médecine, 27, boulevard Jean Moulin 13005 Marseille
The talks and poster sessions will be held in the Seminar room on the ground floor of the Institut de Neurosciences de la Timone (INT, “C” on the map). Lunch will take place on the roof terrace of the institute (4th floor).

Arriving by plane at the Marseille‐Provence airport (airport code: MRS), take the shuttle ("navette") to the city, namely to Marseille's main train station, "Marseille Saint Charles": http://www.navettemarseilleaeroport.com/index.php Then take the subway, line 1 (blue), toward "La Fourragère", get off at "La Timone" (the sixth stop) and take the "Hôpital de la Timone" exit out of the subway station (“A” on the map). If you come from the Vieux Port, simply take the same metro line from the Vieux Port metro station. From there, it is a five‐minute walk, described below. The INT building (“B” on the map) is the right‐most building within the campus of the medical school of the "Université de la Méditerranée".