Social Cognitive Neuroscience


Social cognitive neuroscience and humanoid robotics

Thierry Chaminade a,*, Gordon Cheng b

a Mediterranean Institute for Cognitive Neuroscience (INCM), Aix-Marseille University CNRS, 31 Chemin Joseph Aiguier, 13402 Marseille Cedex, France
b Department of Electrical Engineering and Information Technology, Cluster of Excellence Cognition for Technical Systems CoTeSys, Barer Str. 21, Technical University Munich, 80290 Munich, Germany

Article info

Keywords: Robotic; Humanoid; Human; Cognition; Neuroscience; Social interactions

Abstract

We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions, based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of this framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

© 2009 Elsevier Ltd. All rights reserved.

    1. Introduction

Humanoid robots are robots whose appearance resembles that of a human body, in our case a robot with two legs, two arms and a head attached to a trunk. Because of this anthropomorphism, they provide relevant testbeds for hypotheses pertaining to human cognition. The phrase "understanding the brain by creating the brain" was coined to synthesize how humanoid robots and computational neuroscience could contribute to progress in naturalizing human psychology and the underlying neurophysiology (Asada et al., 2001; Brooks, 1997; Cheng et al., 2007; Kawato, 2008). Here, we will discuss the application of this adage to the investigation of social interactions, on the premise that robots provide testbeds for hypotheses pertaining to natural social interactions.

The distinction we wish to make here is with past approaches that focused on behavior synthesis as the core of cognition (Arkin, 1998; Atkeson et al., 2000; Brooks, 1997) but, although said to be biologically inspired, had little direct input from biological sciences. In contrast, we wish to bring forward a direct connection between humanoid robotics and social cognitive neuroscience, in an endeavor to gain:

1. a better understanding of human-human and human-machine social interactions (Chaminade, 2006; Chaminade and Decety, 2001);

2. a deeper understanding of the brain functions involved in these interactions (Chaminade et al., 2007);

3. better engineering guidelines for building machines suitable for human interactions (as suggested by Cheng et al., 2007).

In this review, we will provide examples of how robots can be used to test hypotheses pertaining to human social neuroscience, both in behavioral (Section 3.1) and neuroimaging (Section 3.2) experiments, but also how social cognitive neuroscience can provide insights for developing socially competent humanoid robots (Section 4.1). First, we will present a brief history of humanoid development.

The last decade has seen the emergence of increasingly autonomous humanoids, and eventually of androids. Honda's humanoids P2, in 1996, followed by P3 in 1997 and ASIMO in 2000 (Hirai et al., 1998; Sakagami et al., 2002), were among the first humanoids walking on their legs and feet (Fig. 1), and eventually climbing stairs and navigating autonomously; they stunned the world by going public: human-like robots were on their way from fiction to reality. SONY produced QRIO (Fig. 1) for entertainment purposes (Nagasaka et al., 2004), and the Humanoid Robotics Project investigated practical applications of humanoid robots (HRP series) cooperating with humans (Hirukawa et al., 2004). Fundamental developments in humanoid research also started with bipedal walking, as early as the mid-1960s (Waseda Lower-Limb series), then moved to using humanoids as the embodied platform necessary for certain applications, with actuators and sensors approximating human motor and sensory processes in order to simulate human intelligence (Brooks, 1997).

0928-4257/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.jphysparis.2009.08.011
* Corresponding author. E-mail address: [email protected] (T. Chaminade).
Journal of Physiology - Paris 103 (2009) 286-295

The use of humanoids to understand the brain is now at the core of many

projects, such as RoboCub, a European project investigating human cognition, and in particular developmental psychology, through the realization of a humanoid robot the size of a 3.5-year-old child, iCub (Sandini et al., 2004). The humanoid robots DB and CB, produced in two projects headed by Mitsuo Kawato, were used in some of the studies reported here. In the ERATO project, the robotics group, led by Dr. Stefan Schaal and in collaboration with the research company SARCOS (Hollerbach and Jacobsen, 1996), developed a humanoid robot called DB (Dynamic Brain), replicating a human body given the robotics technology of the mid-1990s (Fig. 1). It was followed by the ICORP Computational Brain Project, in which Dr. Gordon Cheng, again in collaboration with SARCOS, developed a new humanoid robot called CB (Computational Brain; Cheng et al., 2007), more accurate in reproducing the human body than DB (Fig. 1).

Because they reproduce part of the human appearance,

humanoids provide testbeds for hypotheses pertaining to natural social interactions. They are used to research how a global human-like appearance influences our perception of other agents, in comparison to real humans or, at the other end of the spectrum, industrial robotic arms. This is even more so for androids, a specific type of humanoid that attempts to reproduce the human appearance not only in its global shape, but also in its fine-grained details. Interestingly, the acceptability of androids in everyday applications has been described by the "uncanny valley" of eeriness hypothesized by the Japanese roboticist Masahiro Mori (Mori, 1970). While one would expect the social acceptance of robots to increase with anthropomorphism, the uncanny valley hypothesis postulates that artificial agents attempting, but imperfectly, to impersonate humans, as is the case for androids, induce a negative emotional response (MacDorman and Ishiguro, 2006; Mori, 1970). While this hypothesis has proved impractical to test, as neither anthropomorphism nor emotional response easily lends itself to being described by a one-dimensional variable, understanding the cognitive mechanisms underlying the feeling of uncanniness one experiences when facing an android will be invaluable for understanding human social cognition; this is one of the objectives of the emerging field of android science (MacDorman and Ishiguro, 2006). Androids indistinguishable from humans in terms of form, motion and behavior, a goal not unlike the Total Turing Test proposed by Stevan Harnad (Harnad, 1989), would be invaluable for research by providing fully controlled partners in experimental social interactions. While the artificial conversational abilities at the core of the original Turing Test (Turing, 1950), including language, semantics and symbolism, are beyond the scope of the present article, the concept of a robot passing a Total Turing Test highlights the possible outcomes of bidirectional exchanges between robotic developments and research in human cognition.

The goal of this review is not to provide definitive answers about optimized robot design in the form of a series of guidelines for roboticists, but to present an overview, based on our work, of how robotics and cognitive sciences can work together towards the goal of developing social humanoids. We will rely on one theoretical framework that fueled our work, the hypothesis of motor resonance, which pertains to embodied social interactions with a focus on actions. After a section describing this framework, a second part will present pertinent experimental results obtained using robotic devices, and a last part will attempt to derive guidelines for improving the social competence of interacting humanoids based on this framework.

    2. Motor resonance in social cognition

Theories of social behaviors using concepts of resonance have flourished in the scientific literature following the finding that the same neural structures show an increase of activity both when executing a given action and when observing another individual executing the same action (Blakemore and Decety, 2001; Gallese et al., 2004; Rizzolatti et al., 2001). Neuropsychological findings using action production, perception, naming and imitation hinted, in the early 1990s, that limb praxis and gesture perception share some parts of their cortical circuits (Rothi et al., 1991). Similarly in language, the motor theory of speech perception claimed, on the basis of experimental data, that the objects of speech perception are not sounds, but the phonetic gestures of the speaker, whose neural underpinnings are motor commands (Liberman and Mattingly, 1985). We will refer to these processes under the heading of motor resonance, which is defined, at the behavioral and neural levels, as the automatic activation of motor control systems during the perception of actions.

    2.1. Neurophysiology of resonance

Mirror neurons offered the first physiological demonstration that motor resonance has validity at the cellular level. Mirror neurons are a type of neuron found in the macaque monkey brain and defined by their response, as recorded by single-cell electrophysiological recordings. First reported in 1992 by Giacomo Rizzolatti's group in Parma (di Pellegrino et al., 1992), they were officially named mirror neurons in a 1996 Cognitive Brain Research report as "a particular subset of F5 neurons [which] discharge[s] when the monkey observes meaningful hand movements made by the experimenter" (Gallese et al., 1996). The importance of this discovery stems from the known function of area F5, a premotor area in which neurons discharge when monkeys execute distal goal-directed motor acts such as grasping, holding or tearing an object. Comparing the various reports, it is reasonable to assume that around 20% of recordable neurons in this area have mirror properties in a loose sense, while a lower percentage, around 5%, shows action specificity, i.e. the same action is the most efficient in causing the neuron to fire both when the monkey observes it and when it executes it. These neurons are activated both during the execution of a given goal-directed action and during the observation of the same action made in front of the monkey.

Fig. 1. Center: SONY humanoid robot QRIO (photo courtesy of SONY). Clockwise from bottom left: HONDA humanoid robots P3 and ASIMO (Advance) (photo courtesy of HONDA); Infanoid (photo courtesy of Hideki Kozima); ATR humanoid robot DB (co-developed with SARCOS during the JST Kawato Dynamic Brain project; photo courtesy of Stefan Schaal); CB (co-developed with SARCOS during the Computational Brain project; photo by Jan Moren, courtesy of Gordon Cheng).

The human physiological data, obtained with the brain imaging techniques that emerged in the last decades, such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), magnetoencephalography (MEG) and transcranial magnetic stimulation (TMS), support a conclusion expected on the basis of the mirror neuron literature in the macaque monkey: premotor cortices, originally considered to be exclusively concerned with motor control, are also active during the observation of actions in the absence of any action execution (Chaminade and Decety, 2001). What remains unknown is whether the same brain regions, and a fortiori the same neurons, would be activated by the observation and the execution of the same action in the whole of the premotor system, or whether this specificity is limited to a small percentage of ventral premotor neurons. In other words, are all premotor regions activated in response to the observation of action populated with mirror neurons? Irrespective of the answer to this question, accumulating human neuroimaging data confirm in humans what mirror neurons demonstrated beyond doubt in macaque monkeys at the cellular level: the neurophysiological bases for the perception of other individuals' behaviors make use of the neurophysiological bases for the control of the self's behavior.

An intriguing trend in human cognitive research is that this resonance is not limited to the observation of object-directed hand actions, as mirror neurons are, but generalizes to a number of other domains of cognition. For example, an fMRI study investigated touch perception by looking for an overlap between being touched and observing someone being touched (Keysers and Perrett, 2004). An overlap of activity was found in the secondary somatosensory cortex, a brain region involved in integrating somatosensory information with other sensory modalities. Another study reported activity in the primary somatosensory cortex during the observation of touch (Blakemore et al., 2005). Thus, there is a resonance for touch, by which the observation of someone else being touched recruits the neural underpinnings of the feeling of touch. In the same vein, observation of the expression of disgust activates a region of the insula also activated during the feeling of disgust caused by a nauseating smell (Wicker et al., 2003). Empathy for pain also makes use of resonance, in the anterior cingulate cortex (Singer et al., 2004). Taken together, these findings led to the hypothesis that a generalized resonance between oneself and other selves, or social resonance, underlies a number of social behaviors involving action, such as action understanding (Chaminade et al., 2001) and imitation (Rizzolatti et al., 2001), but also behaviors more generally in the social domain, such as empathy and social bonding.

In summary, the mirror neurons studied in the macaque monkey provide a very specific example of a more general mechanism of human cognition, namely the fact that the neuronal structures used when we experience a mental state, including but not limited to the internal representation of an action, are also used when we perceive other individuals experiencing the same mental state. Recent examples support a generalization of motor resonance to other domains of cognition, such as emotions and pain, that can be transferred between interacting agents, hence the term social resonance.

    2.2. Resonance in social interactions

Motor resonance is evident in behaviors like action contagion (the contagion of yawning, for example), motor priming [the facilitation of the execution of an action by seeing it done (Edwards et al., 2003)] and motor interference [the hindering effect of observing incompatible actions during the execution of actions (Kilner et al., 2003)]. But does the motor resonance described in a laboratory environment have a significant impact in everyday life? The chameleon effect was introduced to describe the unconscious reproduction of postures, mannerisms, facial expressions and other behaviors of one's interacting partner (Chartrand and Bargh, 1999). Subjects unaware of the purpose of the experiment interacted with an experimenter performing one of two target behaviors, rubbing the face or shaking the foot. Analysis of the subjects' behavior showed a significant increase in the tendency to engage in the same action. This effect can easily be experienced in face-to-face interactions, when one crosses one's arms or legs and sees one's partner swiftly adopt the same posture. In addition, this imitation makes the interacting partner more likable, even though one is not aware of the imitation (Chartrand and Bargh, 1999). This mimicry has been described as a source of empathy (Decety and Chaminade, 2003), so that motor resonance offers a parsimonious system for automatically bonding with conspecifics.

The main function classically attributed to resonance is action understanding. The most convincing argument to date comes from neuropsychology, the study of cognitive impairments following brain lesions. It was recently reported that premotor lesions impair the perception of biological motion presented using point-light displays (Saygin, 2007). Therefore, not only are premotor cortices activated during the perception of action, but their lesion also impairs the perception of biological motion, demonstrating that they are functionally involved in the perception of action.

Another function frequently associated with resonance is imitation. Imitation covers a continuum of behaviors ranging from simple, automatic and involuntary action contagion to intentional imitation and emulation (Byrne et al., 2004). It is extensively used as a diagnostic tool in the neuropsychology of apraxia. Research on the neural bases of imitation supports the intervention of motor resonance in several types of imitative behaviors. At the automatic level, observing an action that shares features with an action present in the observer's repertoire primes the production of the same action (Brass et al., 2000). Using fMRI to investigate the neural substrate of this phenomenon, Iacoboni et al. (1999) showed increased activity in the inferior frontal gyrus when subjects' actions were primed by action observation compared to the other action-execution conditions. This region involved in human motor priming is putatively the homologue of the macaque monkey area F5, where mirror neurons were first reported. A study of voluntary imitation aimed at disentangling the brain representations of the goal of an action and of the means to achieve this goal demonstrated a bilateral involvement of the inferior parietal lobule in imitation, irrespective of the feature of the action being imitated (Chaminade et al., 2002); this brain region is in humans the possible homologue of the macaque monkey area PF, where mirror neurons were also reported (Rizzolatti and Craighero, 2004). The same regions were also active when subjects naive to playing the guitar learned to do so by observing an expert in another fMRI experiment (Vogt et al., 2007). Regions in the inferior parietal lobule and ventral premotor cortex were more active when subjects observed actions in order to reproduce them later than during action observation without an instruction to imitate. These results suggest that observed actions were internally simulated in order to parse them into elementary components, to be able to reproduce them later. Altogether, these results support the engagement of structures involved in motor


resonance in increasingly complex forms of imitation, from motor priming to action imitation to imitative learning.

    3. Resonance applied to humanoid robotics

Motor resonance is a well-studied phenomenon central to the understanding of social behaviors (Decety and Chaminade, 2003). The methods that have been developed to investigate it have been extended to investigate how humans react to anthropomorphic artificial agents such as humanoid robots. The underlying assumption is that the measure of resonance indicates the extent to which an artificial agent is considered a social interactor.

    3.1. Behavioral experiments

In an experimental paradigm developed to investigate motor interference, volunteers were asked to raise their fingers in response either to a symbolic cue appearing on a fingernail or to a movement of the finger of a visually presented hand (Brass et al., 2000). The two cues could be present on the same finger (congruent cues) or on different fingers (incongruent cues). In the latter case, there were two conflicting cues and only one was relevant for the volunteers. It was found that the observation of an incongruent finger movement hindered the response to the symbolic cue, i.e. increased the time needed to respond, but that the reverse effect, i.e. the symbolic cue hindering the response to the finger movement, was very small. In other words, when responding to a symbolic cue, the response is hindered by the observation of an incompatible action and facilitated by a compatible one. In this paradigm, producing an action similar to an observed action is a prepotent response that must be inhibited to execute the correct response. To summarize, as a consequence of motor resonance, the perception of another individual's actions influences the execution of actions by the self: observing an action facilitates the execution of the same action (motor priming), and hinders the execution of a different action (motor interference). These behavioral effects can be investigated experimentally to provide objective measures of the magnitude of motor resonance depending on the nature of the agents.

    3.1.1. Motor priming with a robotic hand

Motor priming can be conceived as a form of automatic imitation resulting from motor resonance. In other words, observing an action facilitates (primes) the execution of the same action. In experimental terms, responses that are primed by observation are faster and more accurate. This effect was investigated with two actions, hand opening and hand closing, in response to the observation of a hand opening and closing, the hand being either a realistic human hand or a simple robotic hand with the appearance of an articulated claw with two opposed fingers (Press et al., 2005). Volunteers in the experiment were required to make a prespecified response (to open or to close their right hand) as soon as a stimulus appeared on the screen. Response time was recorded and analyzed as a function of the content of the stimulus, either a human or a robotic hand, in a posture congruent or incongruent with the prespecified movement (e.g. an open or closed hand when the prespecified action is opening the hand). Results showed an increased response time in incongruent compared to congruent conditions in response to both human and robotic hands, suggesting that the motor priming effect was not restricted to human stimuli but generalized to robotic stimuli (Press et al., 2005). As with the motor interference measure, the size of the effect, taking the form of the time difference between responses to incongruent and congruent stimuli, was larger for human stimuli (30 ms) than for robotic stimuli (15 ms).
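The priming effect in this paradigm reduces to a difference of mean response times between incongruent and congruent trials, computed separately per agent. A minimal sketch of that computation follows; the reaction times below are invented for illustration only (chosen to mirror the reported ~30 ms human and ~15 ms robotic effects), and the `priming_effect` helper is a hypothetical name, not code from Press et al.:

```python
import numpy as np

# Hypothetical per-trial reaction times in ms, for illustration only.
rts = {
    ("human",   "congruent"):   np.array([412, 398, 405, 420, 401]),
    ("human",   "incongruent"): np.array([441, 430, 437, 449, 428]),
    ("robotic", "congruent"):   np.array([410, 402, 399, 415, 408]),
    ("robotic", "incongruent"): np.array([424, 418, 413, 431, 420]),
}

def priming_effect(agent):
    """Priming effect = mean RT(incongruent) - mean RT(congruent)."""
    return (rts[(agent, "incongruent")].mean()
            - rts[(agent, "congruent")].mean())

for agent in ("human", "robotic"):
    print(f"{agent}: priming effect {priming_effect(agent):.1f} ms")
```

A positive value indicates that congruent stimuli primed the prespecified response; comparing the values across agents gives the appearance effect discussed in the follow-up experiment.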

A follow-up experiment tested whether the effect is better explained by a bottom-up process due to the overall shape, or by a top-down process caused by the knowledge of the intentionality of humans compared to robotic devices (Press et al., 2006). Human hands were modified by the addition of a metal-and-wire wrist, and were perceived as less intentional than the original hands. Nevertheless, in the priming experiment, no significant differences were found between the priming effect of the original and of the robotized human hand, in favor of the bottom-up hypothesis that the overall hand shape, and not its description as a human or robotic hand, affects the priming effect.

    3.1.2. Motor interference with humanoid robot DB

We investigated the motor resonance elicited by the humanoid robot DB. DB is a 30-degrees-of-freedom (hydraulic actuators), human-size (1.85 m) anthropomorphic robot with legs, arms, fingerless hands, a head and a jointed torso. These human-like features were central to the experiments described in detail now.

This series of experiments (Chaminade et al., 2005; Oztop et al., 2005b) was initiated by Kilner et al.'s (2003) study of motor interference when facing a real human being or an industrial robotic arm. Volunteers in this study produced a vertical or horizontal arm movement while watching another agent in front of them producing a spatially congruent (i.e. vertical when vertical, horizontal when horizontal) or a spatially incongruent (horizontal when vertical and vertical when horizontal) movement. The interference effect, measured by the increase of the variance in the movement, was found when volunteers watched an arm movement spatially incompatible with the one they were producing, e.g. vertical versus horizontal (Fig. 2; Kilner et al., 2003). Interestingly, Kilner et al.'s study did not find any interference effect using an industrial robotic arm moving at a constant velocity, suggesting at first that motor interference was specific to interactions between human agents.

The original experimental paradigm was adapted to investigate how humanoid robots interfere with humans (Fig. 2). In these experiments, subjects performed rhythmic arm movements while observing either a human agent or the humanoid robot DB, standing approximately 2 m away from them, performing either congruent or incongruent 0.5 Hz rhythmic arm movements. The robot was programmed to track the end-point Cartesian trajectories of rhythmic top-left to bottom-right and top-right to bottom-left reaching movements, involving elbow, shoulder and some torso movements, by commanding the right arm and torso joints of the robot. The experimenter listened to a 1 Hz beep on headphones to keep the beat constant. Subjects were instructed to be in phase with the other agent's movements. During each 30-s trial, the kinematics of the endpoint of the subject's right index finger was recorded with a motion capture device. The variance of the executed movements was used as a measure of the motor interference caused by the observed action. Briefly, each individual movement was segmented from the surrounding movements by the identification of endpoints using 3D curvature. Trajectories were projected onto a vertical and a horizontal plane. The signed area of each movement is defined as the deviation from the straight line joining the start and end of each segmented movement. The variance of this signed area within a trial provides an estimate of the amount by which this curvature changes between individual movements. The variance was divided by the mean absolute signed area during the trial to normalize the data.
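The analysis just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `interference_measure` and its inputs are hypothetical names, the movement endpoints are assumed to be already identified (in the study they came from 3D curvature), and the signed area is obtained by simple trapezoidal integration of the deviation from the chord.

```python
import numpy as np

def interference_measure(trial, endpoints, plane=(0, 2)):
    """Normalized variance of the signed area across segmented movements.

    `trial` is an (N, 3) array of 3D finger positions; `endpoints` lists
    the sample indices separating individual movements. Each segment is
    projected onto a plane (axes `plane`, e.g. x-z for a vertical plane),
    and its signed area is the integral of the deviation from the straight
    line joining the segment's start and end. The within-trial variance of
    the signed area, divided by the mean absolute signed area, estimates
    how much curvature varies between movements (motor interference).
    """
    areas = []
    for a, b in zip(endpoints[:-1], endpoints[1:]):
        seg = trial[a:b + 1][:, list(plane)]       # project onto the plane
        chord = seg[-1] - seg[0]
        u = chord / np.linalg.norm(chord)          # unit vector along chord
        rel = seg - seg[0]
        dev = rel[:, 0] * u[1] - rel[:, 1] * u[0]  # signed distance to chord
        along = rel @ u                            # position along the chord
        # Trapezoidal integration of the deviation along the chord.
        areas.append(np.sum(0.5 * (dev[1:] + dev[:-1]) * np.diff(along)))
    areas = np.asarray(areas)
    return areas.var() / np.abs(areas).mean()
```

Identically curved movements give a measure near zero; movements whose curvature varies from cycle to cycle inflate it. The ratio r discussed below would then compare this measure (per-condition variance) between incongruent and congruent trials.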

In a first experiment (Oztop et al., 2005b), trajectories were derived from captured human motion of the same movements performed by the human control for the experiment. We found (Fig. 2) that, in contrast to the industrial robotic arm, the humanoid robot executing movements based on motion-captured data caused a significant change of the variance of the movement depending on


congruency (Oztop et al., 2005b). The ratio between the variance in the incongruent and in the congruent conditions increases from the industrial robotic arm (r = 1, no increase in the incongruent condition, as reported in Kilner et al. (2003)) to the human (r ≈ 2), both in our study and in Kilner et al.'s. The new result was that a humanoid robot triggers an interference effect, but a weaker one than a human (r ≈ 1.5).

In a follow-up experiment, we investigated the effect of the movement kinematics on the interference. The humanoid robot moved either with a biological motion based, as previously, on recorded trajectories, or with an artificial motion implemented as a 1-DOF sinusoidal movement of the elbow. We found a significant effect of the factors defining the experimental conditions. The increase in the incongruent conditions was only significant when the robot movements followed biological motion (Chaminade et al., 2005); a similar trend for artificial motion was not significant. The ratio that could be calculated on the basis of the results was, in the case of biological motion, comparable to the ratio reported in the previous experiment, ≈1.3. Note the importance of having internal controls, in this case human agents, to compare the ratio within groups.

Fig. 2. Top: factorial plan showing the four canonical conditions of the motor interference experiment: horizontally, the spatial congruency between the volunteer's and the tested agent's movements; vertically, the human control and the agent being tested, in this case the humanoid robot DB. Bottom: summary of the results from the three experiments described in the text. Bars represent the ratio between the variance for incongruent and congruent movements (error bars: standard error of the mean). Effect of appearance: results are given for three agents, an industrial robot on the left (Kilner et al., 2003), a humanoid robot with biological motion at the center (Oztop et al., 2005b) and a human on the right (Kilner et al., 2003; Oztop et al., 2005b). Effect of motion: the humanoid robot DB displays artificial (ART) or biological (BIO) motion (Chaminade et al., 2005). Effect of visibility: the humanoid robot displays artificial (ART) or biological (BIO) motion while its body is visible or hidden by a cloth (unpublished observations).

A final experiment assessed whether seeing the full body or only body parts of the other agent influences motor resonance (Chaminade, Franklin, Oztop and Cheng, unpublished results). The interference effect could be due merely to the appearance of the agent, which would predict a linear increase of the ratio


between the variance for incongruent and congruent movements with anthropomorphism. Alternatively, it could be influenced by the knowledge we have about the nature of the other agent. Current knowledge on motor resonance, as well as the previous results, including the doubling of variance reproduced in ours and in Kilner et al.'s (2003) experiment, favors the former hypothesis of a purely bottom-up (i.e. perceptual and automatic) process. To test whether appearance was the main factor, we covered the body and face of both agents, the human and the humanoid robot, with a black cloth leaving just the moving arm visible, and compared the results of the interference paradigm between covered and uncovered agents. Preliminary results indicate that the variance is only increased when the body is visible, implying that motor interference cannot be measured in the absence of body visibility. This suggests that arm movements, from either a human or a humanoid robot, do not provide sufficient cues about the nature of the agent being interacted with to elicit motor resonance (a bottom-up effect of the stimulus). Also, knowledge about the aspect of the agent being interacted with is not sufficient to elicit motor resonance (a top-down effect of the knowledge). These results confirm the conclusions of the motor priming experiment described previously, in favor of a bottom-up effect due to the appearance of the robotic device.

Overall, these accumulating results confirm the validity of using motor interference and motor priming as metrics of motor resonance, a possible proxy for social competence, with humanoid robots. First, motor resonance is an important aspect of social cognition, particularly in the automatic and unconscious perception of other agents. Second, the effects of motor resonance on behavior can be measured objectively, as movement variance or reaction time. Third, existing results strongly suggest the effect is modulated by the appearance of the agent being tested. And finally, these interference effects have been shown to increase with the realism of the stimulus.

    3.2. Neuroimaging experiments

Motor resonance has been extensively studied with neuroimaging in humans, and it is possible to adapt similar approaches to the perception of anthropomorphic robots.

    3.2.1. Neuroimaging of grasping movements with a robotic hand

Neuroimaging experiments comparing the observation of humans versus robots have so far yielded mixed results. In a PET study, subjects were presented with grasping actions performed by a human or by a robotic arm. The authors report that the left ventral premotor activity found in previous experiments of action observation responded to human, but not robot, actions (Tai et al., 2004). However, results of a recent fMRI study indicate that a robotic arm and hand elicit motor resonance, in the form of increased activity in regions activated by the execution of actions, during the observation of object-directed actions compared to simple movements (Gazzola et al., 2007). Furthermore, the trend is towards increased activity in response to robot compared to human stimuli, though this increase is not reported as significant. How can we reconcile these two sets of results? One possibility, the difference between the techniques used in these experiments, PET and fMRI, cannot explain the dramatic reversal of the results. Another possibility derives from differences in the anthropomorphism of the robotic arm and hand used by the two groups, but in the absence of a figure representing the robotic arm used in Tai et al., it is difficult to draw conclusions. It is enough to acknowledge here that according to both reports, the robotic arms and hands and their motions were not attempting to be realistic. The interpretation proposed by Gazzola et al., that the repetition of stimuli reduces activity in these areas, also seems questionable, as both robot and human stimuli underwent the same procedure in Tai et al.'s experiment. Note that the absence of motor interference when the body is hidden, reported in the previous section, is not relevant to understanding the present data, as stimuli consisted of object-directed actions in both experiments, in contrast to the meaningless arm movements used in the interference experiments.

Another source of discrepancy between the two studies comes from the experimental instructions. Indeed, instructions can have significant effects on the brain structures involved in a given cognitive task. This has been clearly shown in an fMRI study in which subjects interacted with the same random program but were presented with a partner varying in anthropomorphism (Krach et al., 2008). Regions involved in mentalizing were more active when subjects believed they were interacting with a human compared to an unintentional, artificial agent. This highlights the importance of the experimental setting, in particular when using artificial agents. While it is the robot's embodiment that is manipulated in both the Tai et al. and Gazzola et al. studies, their instructions do differ. In the first report, subjects were instructed to carefully observe the human (experimenter) or the robot model, while in the second, subjects were instructed to watch the movies carefully, paying particular attention to the relationship between the agents and the objects. We'll propose in the next part that differences between these instructions, in particular the focus on the goal of the actions, can explain discrepancies in the results.

3.2.2. Neuroimaging of a humanoid robot's actions

The preliminary results partially presented here derive from an international collaboration (Thierry Chaminade, Sarah-Jayne Blakemore, Chris D. Frith from UCL, UK; Massimiliano Zecca, Silvestro Micera, Paolo Dario, Atsuo Takanishi from RoboCasa, Japan; Giacomo Rizzolatti, Vittorio Gallese, Maria Alessandra Umiltà from Università di Parma, Italy; manuscript in preparation) aimed at investigating the involvement of motor resonance during the observation of a humanoid robot. Using fMRI, local brain activity was recorded while participants observed video clips of human and humanoid robot facial expressions of emotions, and rated either the emotion ("how much emotion in the video", explicit task) or the movement ("how much motion in the video", implicit task). The humanoid robot used for this experiment, WE-4RII, has 59 degrees of freedom (DOFs), 26 of which were specifically used for controlling the facial expressions executed in this experiment, plus 5 DOFs in the shoulders, important for squaring or shrugging gestures used in the expression of emotions. A subset of the facial Action Units (AUs, described in Ekman and Friesen, 1978) was chosen for a simplified but realistic reproduction of the facial expressions of emotions used in this experiment (Itoh et al., 2004).
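To illustrate how a reduced AU set can drive a robotic face, the sketch below maps prototypical expressions onto commonly cited Action Units from Ekman and Friesen's coding scheme; the actuator (DOF) names and gains are hypothetical, invented for illustration, and not WE-4RII's actual control interface:

```python
# Commonly cited FACS Action Units for prototypical expressions
# (Ekman and Friesen, 1978). The AU-to-actuator mapping below is
# purely illustrative, not WE-4RII's real control interface.
EMOTION_AUS = {
    "happiness": [6, 12],        # cheek raiser, lip corner puller
    "surprise": [1, 2, 5, 26],   # brow raisers, upper lid raiser, jaw drop
    "sadness": [1, 4, 15],       # inner brow raiser, brow lowerer, lip corner depressor
}

AU_TO_DOF = {  # hypothetical actuator assignments: AU -> (DOF name, gain)
    1: ("inner_brow", 1.0), 2: ("outer_brow", 1.0), 4: ("brow_lower", -1.0),
    5: ("upper_lid", 0.8), 6: ("cheek", 0.6), 12: ("lip_corner", 1.0),
    15: ("lip_corner", -0.7), 26: ("jaw", 0.9),
}

def expression_command(emotion, intensity=1.0):
    """Sum the actuator targets for every AU in the chosen expression."""
    cmd = {}
    for au in EMOTION_AUS[emotion]:
        dof, gain = AU_TO_DOF[au]
        cmd[dof] = cmd.get(dof, 0.0) + gain * intensity
    return cmd
```

The point of such a table is that a handful of AUs, each tied to a few DOFs, suffices to span the basic expressions, which is why 26 of WE-4RII's 59 DOFs could carry the facial repertoire.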

We were particularly interested in activity in the left ventral premotor cortex, a region involved in motor resonance that was found in the main effect of action observation. There was a significant interaction between the subjects' task (implicit or explicit) and the agent used to display the stimulus (human or robot). Fig. 3 illustrates the source of this effect. The signal increase between the implicit and explicit tasks was mainly driven by the robot. This increased response to robot stimuli when subjects rated the emotionality of the stimulus supports a modulation by the task of the motor resonance system's response to the humanoid robot. Our interpretation is grounded on the postulate that one function of the motor resonance processes taking place in inferior frontal cortices is to extract automatically the goal of observed human actions (Rizzolatti and Craighero, 2004). Bottom-up processes would then be automatic when perceiving human stimuli, and would show little to no modulation by the task, as


is the case here in response to human stimuli. In contrast, robot stimuli would not be processed automatically, because the system has no existing representation of robots' actions, as is the case when subjects rated the movement (implicit task). The large increase of activity in the left inferior frontal cortex during presentation of robot stimuli when the task is to explicitly judge emotion can be understood as forcing the perceptual system to process robot stimuli as goal-directed, anthropomorphic actions: when the task is to explicitly rate the emotion, the subject's attention is directed towards the goal of the action, the emotion. The interaction between task and agent would thus derive from an interaction between bottom-up processes, influenced by the nature of the agent (automatic for the human, not the robot), and top-down processes, depending on the object of attention. If this interpretation is correct, motor resonance towards artificial agents would be enhanced when the agents' actions are explicitly processed as actions, and not mechanical movements, by the perceiver.

This finding offers an interesting solution to the issue raised in the previous section: by asking subjects to pay particular attention to the relationship between the agents and the objects, Gazzola et al. oriented their subjects' attention to process the robot's movements as transitive, goal-directed actions, hence reinforcing a top-down activation of motor resonance. In contrast, Tai et al.'s instruction to carefully observe the agent did not impose focusing attention on the goal of the action, hence relying exclusively on bottom-up processes to activate motor resonance, which are reduced towards humanoid robots. An important conclusion with regard to the social competence of humanoid robots therefore relates to the way they are perceived, either as mechanical devices or as goal-directed agents, which is influenced by the expectations of the observer.

4. Resonance and humanoid robots' design

While robots appear to be pertinent tools to investigate motor resonance, the last part of this review focuses on the complementary question: can social cognitive neuroscience, and in the present focus the concept of resonance, be used to enhance the social competence of humanoid robots? While complete achievements are scarce, two lines of investigation are described here: can we build resonating robots, and could the uncanny valley hypothesis be explained by the concept of resonance?

    4.1. Robots resonating with humans

Artificial anthropomorphic agents such as humanoid and android robots are increasingly present in our societies, and everyday use of robots is becoming accessible, as with the example of the Kokoro company's Simroid, a "feeling" and responsive android patient used as a training tool for dentists, or robotic companions being introduced for use with children (Tanaka et al., 2007) or elderly people. For these robots to interact optimally with humans, it is important to understand humans' reactions to these artificial agents in order to optimize their design. Studies have addressed the issue of the form (DiSalvo et al., 2002) and functionalities (Breazeal and Scassellati, 2000; Kozima and Yano, 2001) a humanoid robot should have in order to be socially accepted. Both types of approaches have mostly relied on subjective assumptions, such as the need for human traits. It was thus proposed that the design of consumer-product humanoids should balance "human-ness", facilitating social interaction, with an amount of "robot-ness", so that the observer does not develop false expectations about the robot's emotional capabilities, and "product-ness", so that the user feels comfortable using the robot; guidelines were provided on how to achieve this balance (DiSalvo et al., 2002). For example, the face should be wider than tall to look less anthropomorphic, but have a nose, a mouth, eyes and eyelids.

But anthropomorphism is not limited to the robot's appearance and motion: an interactive robot's behaviors also matter for interacting with humans. Robot–human interactions are massively unidirectional at present. As increasingly complex and autonomous humanoid platforms become available, we believe that including human-like motor resonance in their behavior would significantly improve the social competence of their interactions. We recently demonstrated the feasibility of such an approach (Chaminade et al., 2008; Oztop et al., 2005a). Our hypothesis was that synchronized sensory feedback of executed actions could drive Hebbian learning in associative brain networks, forming motor resonance networks from which contagion of behaviors could emerge. This scenario was inspired by the theoretical proposal that motor resonance networks can result from Hebbian learning of associations between visual and motor representations of actions (Keysers and Perrett, 2004), as well as by observations from developmental psychology that synchronized action and sensory feedback are available to neonates during motor babbling with their hands (Heyes, 2001).

Fig. 3. Top: location of the left ventral premotor cluster in which brain activity was analyzed. Bottom: graphs presenting brain activity in response to the human (white) and robot (grey) agents presented on the right, depending on the task (error bars represent the standard error of the mean). Note the larger increase between implicit and explicit tasks for robot than for human stimuli.


In our system, a simple associative network linked a robotic hand and a simple visual system consisting of a camera. During a training phase, the network was fed simultaneously with the motor commands sent to the robotic hand to perform gestures and with the visual feedback of the robotic hand. During a testing phase, the system was presented with the same or new hand postures, or with hand postures from a human agent. Our results indicated that some features of human behaviors, such as the ability to perform new actions (i.e. not present in the repertoire formed by training) by imitation, can emerge from this connectionist associative network (Chaminade et al., 2008). Similar results were obtained with a non-anthropomorphic robotic arm (Nadel et al., 2004). As is the case for behaviors derived from motor resonance, this imitation is unconscious, in the sense that the system was not designed in order to imitate, but to reproduce the ontogenic origin of the resonance system, testing whether this reproduction is sufficient to bootstrap a key behavior making use of motor resonance, imitation.
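The associative mechanism can be caricatured with a one-layer Hebbian network, a toy sketch under our own simplifying assumptions, far simpler than the network of Chaminade et al. (2008): during "motor babbling", visual and motor patterns that co-occur strengthen their connections, so that later the sight of a posture alone evokes the matching motor command.

```python
class HebbianVisuomotorNet:
    """One-layer associative memory: w[i][j] links visual unit j to
    motor unit i, strengthened whenever the two are co-active."""

    def __init__(self, n_visual, n_motor, lr=0.1):
        self.w = [[0.0] * n_visual for _ in range(n_motor)]
        self.lr = lr

    def babble(self, visual, motor):
        # Hebbian update during self-observation: the motor command and
        # its visual consequence are presented simultaneously.
        for i, m in enumerate(motor):
            for j, v in enumerate(visual):
                self.w[i][j] += self.lr * m * v

    def resonate(self, visual):
        # At test time, vision alone drives the motor units.
        return [sum(wij * v for wij, v in zip(row, visual)) for row in self.w]

net = HebbianVisuomotorNet(n_visual=4, n_motor=2)
net.babble(visual=[1, 0, 1, 0], motor=[1, 0])   # posture A with command 0
net.babble(visual=[0, 1, 0, 1], motor=[0, 1])   # posture B with command 1
activation = net.resonate([1, 0, 1, 0])          # seeing posture A again
```

Seeing posture A now activates motor unit 0 most strongly: imitation falls out of the learned visuomotor association rather than being programmed in, which is the sense in which the system imitates "unconsciously".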

Building on this proof-of-concept, similar associative learning could be used with humanoid robots to develop a realistic architecture for full-body motor resonance abilities at the core of the robotic platform, akin to providing the robot with a sensorimotor body schema. This architecture could subtend realistic human behaviors. For instance, studies of natural interaction between humans have demonstrated that, as a consequence of motor resonance, interacting agents align their behaviors (Schmidt and Richardson, 2008): two persons walking together in the street synchronize their step frequency unconsciously (Courtine and Schieppati, 2003), and crowds applaud synchronously when one person starts clapping at the end of a show (Neda et al., 2000). As bi-directionality is a hallmark of social interactions, implementing bidirectional coordination of behaviors in humanoid robots by incorporating a motor resonance framework into the platform may lead to dramatic improvements of their social abilities, though such a conclusion awaits demonstration.
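A minimal model of such unintended alignment is a pair of mutually coupled phase oscillators, a standard Kuramoto-style sketch; this is our own illustration, and the coupling constant and frequencies are arbitrary, not fitted to walking or clapping data:

```python
import math

def simulate_coupled_steps(k=0.8, steps=2000, dt=0.01):
    """Two 'walkers' with slightly different preferred step frequencies;
    each adjusts its phase toward the other (mutual coupling k).
    Returns |sin(phase difference)| after the simulation: it settles to
    a small constant when the pair phase-locks, instead of cycling."""
    theta_a, theta_b = 0.0, 2.0       # start far out of phase
    omega_a, omega_b = 2.0, 2.3       # natural frequencies (rad/s)
    for _ in range(steps):
        theta_a += (omega_a + k * math.sin(theta_b - theta_a)) * dt
        theta_b += (omega_b + k * math.sin(theta_a - theta_b)) * dt
    return abs(math.sin(theta_b - theta_a))
```

With k = 0 the phase difference drifts indefinitely; above the locking threshold (k greater than half the frequency mismatch) the two settle into a constant phase relation, the analogue of two pedestrians falling into step without intending to.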

    4.2. Motor resonance and the uncanny valley

The "Uncanny Valley" of eeriness hypothesis has served for years as a guideline to avoid realistic anthropomorphism in robotic designs for commercial usage. This hypothesis postulates that artificial agents imperfectly attempting to impersonate humans induce a negative emotional response (MacDorman and Ishiguro, 2006; Mori, 1970). As Toshitada Doi, an official representative commenting on the design of Sony's humanoid robot QRIO, explained: "We suggested the idea of an eight-year-old space life form to the designer; we did not want to make it too similar to a human. In the background, as well, lay an idea passed down from the man whose work forms the foundation of the Japanese robot industry, Masahiro Mori: the valley of eeriness. If your design is too close to human form, at a certain point it becomes just too . . . uncanny. So, while we created QRIO in a human image, we also wanted to give it a little bit of a 'spaceman' feel." Nowadays, though, people like David Hanson, founder of Hanson Robotics, build realistic anthropomorphic robots under the assumption that the uncanny valley is an illusion caused by the poor quality of aesthetic designs (Hanson, 2005), not an insurmountable limit.

A speculative explanation for the uncanny valley hypothesis could be derived from the motor resonance framework described here. Results from the previous section support the hypothesis that the neural network subtending resonance results from Hebbian learning of associations between visual and motor representations of actions (Chaminade et al., 2008; Keysers and Perrett, 2004). The simultaneous experience of performing an action and of perceiving an action during human development is responsible for establishing resonance networks, which are in turn used to imitate and understand others' actions. The actual processes engaged when we understand a perceived action can be described as a competition between various representations of action. Selection of one representation among many would rely on iteratively reducing an error term (called a prediction error) between accumulating evidence about a perceived action and the predictions derived from competing existing representations of actions in the resonance network. Let's speculate about the balance between the strength of the representation of an action and the prediction error when perceiving the same action performed by a human, a humanoid and an android robot. As the resonance network has been trained by the observation of human actions, the internal representation of the observed action will be selected by the reduction of the prediction error in the case of a human agent.

We proposed previously that bottom-up processes are reduced in the case of humanoids, so that the driving inputs to the resonance system are weaker than in the case of humans. Top-down control of resonance to interpret the robot's movements as actions yields a larger error as a consequence of the mechanical (i.e. not human) appearance, as suggested by fMRI: when subjects have to process actions explicitly, in Gazzola et al. as well as in the results described in Section 3.2.2, we observe a trend towards increased activity in response to robot compared to human actions. In the case of contemporary androids, the realistic human appearance triggers bottom-up processes as for real humans. But while the stimulus does select the representation of the correct action, the motor resonance system is unable to match its predictions with the incoming information because of the android's imperfections in form and/or motion. The ensuing large prediction error signal could give rise to the feeling of eeriness: the realistic albeit imperfect impersonation of a human acting does not fit existing representations of human actions.
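The competition scheme sketched above can be made concrete with a toy prediction-error calculation; this is our own illustration, and the kinematic feature vectors below are invented:

```python
def select_action(observed, repertoire):
    """Return the stored action representation with the smallest
    prediction error, together with the residual error; in the
    speculation above, a large residual is the correlate of eeriness."""
    def error(pred):
        return sum((o - p) ** 2 for o, p in zip(observed, pred)) / len(observed)
    best = min(repertoire, key=lambda name: error(repertoire[name]))
    return best, error(repertoire[best])

# Representations learned from watching humans (hypothetical kinematic features).
repertoire = {"wave": [1.0, 0.8, 0.2], "grasp": [0.2, 0.9, 0.7]}

human_wave = [1.0, 0.8, 0.2]     # matches a stored representation almost exactly
android_wave = [0.9, 0.5, 0.45]  # human-like, but kinematics slightly off
```

Both stimuli select "wave", but the android leaves a much larger residual error after the best candidate is chosen; on this account, it is that unresolved error, not the selection failure itself, that would be felt as eeriness.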

We have preliminary fMRI data supporting this interpretation. In an international collaboration to investigate the perception of androids (Ayse Saygin, Thierry Chaminade, Chris Frith and Jon Driver from UCL, UK; Hiroshi Ishiguro and colleagues, Osaka University, Japan; manuscript in preparation), we recorded brain responses to the perception of actions performed by the android Repliee Q2, by the human after whom the android was modeled, and by a humanoid robot obtained by removing the android's cover skin. Repetition priming was used to isolate regions specifically responding to each agent's actions. The main results come from comparing repetition priming results for the three agents: while the human and the robot activate a limited number of circumscribed regions, the effect for the android is much larger and widespread across the cortex, as expected from an error signal: the inability to minimize the error recruits numerous cognitive processes in order to make sense of the input.

In this interpretation of the uncanny valley, as we get closer to

human appearance, the perceptual system, tuned by design for recognizing human actions, becomes particularly sensitive to the tiniest flaws in the android's form and motion. Such a view supports David Hanson's argument: the uncanny valley is mainly a result of poor aesthetic design. This is a particularly important line of development in the perspective of the design of companion robots. By extension, it is likely that a similar uncanniness will result from imperfect social behaviors of anthropomorphic robots. For example, the addition of random micro-behaviors in Hiroshi Ishiguro's latest android has proven beneficial in avoiding it falling into the uncanny valley (Minato et al., 2004). Thus, our proposal to design anthropomorphic social behaviors based on motor resonance can participate in making robots that interact with humans more acceptable.


    5. Conclusions

The fields of humanoid robotics and of social cognition can both benefit from mutual exchanges. Robots provide tools to investigate parameters modulating both behavioral and neural markers of motor resonance. Using the humanoid robot DB, we have shown that human-like appearance and motion are sufficient to elicit motor resonance. Investigating the brain response to the emotion-expressing robotic upper torso WE-4RII, we've proposed that while resonance is primarily a perceptual (i.e. automatic) process when perceiving humans, it may be more susceptible to the attention of the observer when perceiving robots. This result could be useful to frame users' expectations and increase robots' acceptability. Finally, we've shown that resonance could inspire epigenetic robotics, in particular the implementation of a body schema. These reciprocal influences between social cognitive neuroscience and humanoid robotics thus promise a better understanding of man–robot interactions that will ultimately lead to increasing the social acceptance of future robotic companions.

    References

Arkin, R.C., 1998. Behavior-Based Robotics. MIT Press, Cambridge, MA.
Asada, M., MacDorman, K.F., Ishiguro, H., Kuniyoshi, Y., 2001. Cognitive developmental robotics as a new paradigm for the design of humanoid robots. Robot. Auton. Syst. 37, 185–193.
Atkeson, C.G., Hale, J.G., Pollick, F., Riley, M., Kotosaka, S., Schaal, S., Shibata, T., Tevatia, G., Ude, A., Vijayakumar, S., Kawato, M., 2000. Using humanoid robots to study human behavior. IEEE Intell. Syst. 15, 46–56.
Blakemore, S.J., Decety, J., 2001. From the perception of action to the understanding of intention. Nat. Rev. Neurosci. 2, 561–567.
Blakemore, S.J., Bristow, D., Bird, G., Frith, C., Ward, J., 2005. Somatosensory activations during the observation of touch and a case of vision–touch synaesthesia. Brain 128, 1571–1583.
Brass, M., Bekkering, H., Wohlschlager, A., Prinz, W., 2000. Compatibility between observed and executed finger movements: comparing symbolic, spatial, and imitative cues. Brain Cogn. 44, 124–143.
Breazeal, C., Scassellati, B., 2000. Infant-like social interactions between a robot and a human caretaker. Adapt. Behav. 8, 49–74.
Brooks, R.A., 1997. The Cog project. Advanced robotics. J. Robot. Soc. Jpn. 15, 968–970.
Byrne, R.W., Barnard, P.J., Davidson, I., Janik, V.M., McGrew, W.C., Miklosi, A., Wiessner, P., 2004. Understanding culture across species. Trends Cogn. Sci. 8, 341–346.
Chaminade, T., 2006. Acquiring and probing self–other equivalencies using artificial agents to study social cognition. Paper presented at: 15th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN), Reading, UK.
Chaminade, T., Decety, J., 2001. A common framework for perception and action: neuroimaging evidence. Behav. Brain Sci. 24, 879–882.
Chaminade, T., Meary, D., Orliaguet, J.P., Decety, J., 2001. Is perceptual anticipation a motor simulation? A PET study. NeuroReport 12, 3669–3674.
Chaminade, T., Meltzoff, A.N., Decety, J., 2002. Does the end justify the means? A PET exploration of the mechanisms involved in human imitation. Neuroimage 15, 318–328.
Chaminade, T., Franklin, D., Oztop, E., Cheng, G., 2005. Motor interference between humans and humanoid robots: effect of biological and artificial motion. Paper presented at: International Conference on Development and Learning, Osaka, Japan.

Chaminade, T., Hodgins, J., Kawato, M., 2007. Anthropomorphism influences perception of computer-animated characters' actions. Soc. Cogn. Affect. Neurosci. 2, 206–216.
Chaminade, T., Oztop, E., Cheng, G., Kawato, M., 2008. From self-observation to imitation: visuomotor association on a robotic hand. Brain Res. Bull. 75, 775–784.
Chartrand, T.L., Bargh, J.A., 1999. The chameleon effect: the perception–behavior link and social interaction. J. Pers. Soc. Psychol. 76, 893–910.
Cheng, G., Hyon, S.-H., Morimoto, J., Ude, A., Hale, J., Colvin, G., Scroggin, W., Jacobsen, S., 2007. CB: a humanoid research platform for exploring neuroscience. J. Adv. Robot. 21, 1097–1114.
Courtine, G., Schieppati, M., 2003. Human walking along a curved path. I. Body trajectory, segment orientation and the effect of vision. Eur. J. Neurosci. 18, 177–190.
Decety, J., Chaminade, T., 2003. When the self represents the other: a new cognitive neuroscience view on psychological identification. Conscious. Cogn. 12, 577–596.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., Rizzolatti, G., 1992. Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180.
DiSalvo, C., Gemperle, F., Forlizzi, J., Kiesler, S., 2002. All robots are not created equal: the design and perception of humanoid robot heads. Paper presented at: 4th Conference on Designing Interactive Systems, London, UK.
Edwards, M.G., Humphreys, G.W., Castiello, U., 2003. Motor facilitation following action observation: a behavioural study in prehensile action. Brain Cogn. 53, 495–502.
Ekman, P., Friesen, W.V., 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA.
Gallese, V., Fadiga, L., Fogassi, L., Rizzolatti, G., 1996. Action recognition in the premotor cortex. Brain 119 (Pt 2), 593–609.
Gallese, V., Keysers, C., Rizzolatti, G., 2004. A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 396–403.
Gazzola, V., Rizzolatti, G., Wicker, B., Keysers, C., 2007. The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage 35, 1674–1684.
Hanson, D., 2005. Expanding the aesthetics possibilities for humanlike robots. Paper presented at: IEEE Humanoid Robotics Conference, special session on the Uncanny Valley, Tsukuba, Japan.
Harnad, S., 1989. Minds, machines and Searle. J. Exp. Theor. Artif. Intell. 1, 5–25.

    Heyes, C., 2001. Causes and consequences of imitation. Trends Cogn. Sci. 5, 253

    261.

    Hirai, K., Hirose, M., Haikawa, Y., Takenaka, T., 1998. The development of Honda

    humanoid robot. Paper Presented at: IEEE International Conference on Robotics

    and Automation, Leuven, Belgium.

    Hirukawa, Kanehiro, Kaneko, K., Kajita, S., Fujiwara, K., Kawai, Y., Tomita, F., Hirai,

    Tanie, K., Isozumi, T., et al., 2004. Humanoid robotics platforms developed in

    HRP. Robot. Auton. Syst. 48, 165175.

    Hollerbach, J.M., Jacobsen, S.C., 1996. Anthropomorphic robots and human

    interactions. Paper Presented at: First International Symposium on Humanoid

    Robots, Waseda, Japan.

    Iacoboni, M., Woods, R.P., Brass, M., Bekkering, H., Mazziotta, J.C., Rizzolatti, G.,

    1999. Cortical mechanisms of human imitation. Science 286, 25262528.

    Itoh, K., Miwa, H., Matsumoto, M., Zecca, M., Takanobu, H., Roccella, S., Carrozza,

    M.C., Dario, P., Takanishi, A., 2004. Various emotional expressions with emotion

    expression humanoid robot WE-4RII. Paper Presented at: First IEEE Technical

    Exhibition Based Conference on Robotics and Automation (TExCRA 04), Tokyo.

    Kawato, M., 2008. From understanding the brain by creating the brain towards

    manipulative neuroscience. Philos. Trans. Roy. Soc. Lond. B Biol. Sci. 363, 2201

    2214.

    Keysers, C., Perrett, D.I., 2004. Demystifying social cognition: a Hebbian perspective.

    Trends Cogn. Sci. 8, 501507.

    Kilner, J.M., Paulignan, Y., Blakemore, S.J., 2003. An interference effect of observed

    biological movement on action. Curr. Biol. 13, 522525.

    Kozima, H., Yano, H., 2001. A robot that learns to communicate with human

    caregivers. Paper Presented at: First International Workshop on Epigenetic and

    Robotics, Lund, Sweden.

    Krach, S.R., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T., 2008. Can

    machines think? interaction and perspective taking withrobots investigated via

    fMRI. PLoS ONE 3, e2597.

    Liberman, A.M., Mattingly, I.G., 1985. The motor theory of speech perception

    revised. Cognition 21, 136.MacDorman, K.F., Ishiguro, H., 2006. The uncanny advantage of using androids in

    cognitive and social science research. Interact Stud 7.

Minato, T., Shimada, M., Ishiguro, H., Itakura, S., 2004. Development of an android robot for studying human–robot interaction. In: Innovations in Applied Artificial Intelligence. Springer, Berlin, pp. 424–434.
Mori, M., 1970. The valley of eeriness (in Japanese). Energy 7, 33–35.
Nadel, J., Revel, A., Andry, P., Gaussier, P., 2004. Toward communication: first imitations in infants, low-functioning children with autism and robots. Interact. Stud. 5, 45–74.
Nagasaka, K., Kuroki, Y., Suzuki, S., Itoh, Y., Yamaguchi, J., 2004. Integrated motion control for walking, jumping and running on a small bipedal entertainment robot. Paper Presented at: IEEE International Conference on Robotics and Automation, New Orleans, LA.
Néda, Z., Ravasz, E., Brechet, Y., Vicsek, T., Barabási, A.L., 2000. The sound of many hands clapping. Nature 403, 849–850.
Oztop, E., Chaminade, T., Cheng, G., Kawato, M., 2005a. Imitation bootstrapping: experiments on a robotic hand. Paper Presented at: 5th IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan.
Oztop, E., Franklin, D., Chaminade, T., Cheng, G., 2005b. Human–humanoid interaction: is a humanoid robot perceived as a human? Int. J. Humanoid Robot. 2, 537–559.
Press, C., Bird, G., Flach, R., Heyes, C., 2005. Robotic movement elicits automatic imitation. Brain Res. Cogn. Brain Res. 25, 632–640.
Press, C., Gillmeister, H., Heyes, C., 2006. Bottom-up, not top-down, modulation of imitation by human and robotic models. Eur. J. Neurosci. 24, 2415–2419.
Rizzolatti, G., Craighero, L., 2004. The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192.
Rizzolatti, G., Fogassi, L., Gallese, V., 2001. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2, 661–670.
Rothi, L.J.G., Ochipa, C., Heilman, K.M., 1991. A cognitive neuropsychological model of limb praxis. Cogn. Neuropsychol. 8, 443–458.
Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., Fujimura, K., 2002. The intelligent ASIMO: system overview and integration. Paper Presented at: IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland.
Sandini, G., Metta, G., Vernon, D., 2004. RobotCub: an open framework for research in embodied cognition. Paper Presented at: IEEE International Conference on Humanoid Robots, Los Angeles, CA.

294 T. Chaminade, G. Cheng / Journal of Physiology - Paris 103 (2009) 286–295


Saygin, A.P., 2007. Superior temporal and premotor brain areas necessary for biological motion perception. Brain 130, 2452–2461.
Schmidt, R.C., Richardson, M.J., 2008. Dynamics of interpersonal coordination. In: Fuchs, A., Jirsa, V. (Eds.), Coordination: Neural, Behavioural and Social Dynamics. Springer-Verlag, Heidelberg.
Singer, T., Seymour, B., O'Doherty, J., Kaube, H., Dolan, R.J., Frith, C.D., 2004. Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162.
Tai, Y.F., Scherfler, C., Brooks, D.J., Sawamoto, N., Castiello, U., 2004. The human premotor cortex is mirror only for biological actions. Curr. Biol. 14, 117–120.
Tanaka, F., Cicourel, A., Movellan, J.R., 2007. Socialization between toddlers and robots at an early childhood education center. Proc. Natl. Acad. Sci. 104, 17954–17958.
Turing, A., 1950. Computing machinery and intelligence. Mind 59, 433–460.
Vogt, S., Buccino, G., Wohlschlager, A.M., Canessa, N., Shah, N.J., Zilles, K., Eickhoff, S.B., Freund, H.J., Rizzolatti, G., Fink, G.R., 2007. Prefrontal involvement in imitation learning of hand actions: effects of practice and expertise. Neuroimage 37, 1371–1383.
Wicker, B., Keysers, C., Plailly, J., Royet, J.P., Gallese, V., Rizzolatti, G., 2003. Both of us disgusted in my insula: the common neural basis of seeing and feeling disgust. Neuron 40, 655–664.
