
Tempo Adaptation and Anticipation Methods for Human-Robot Teams

Tariq Iqbal, Maryam Moosaei, and Laurel D. Riek

Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, 46556 USA.
e-mail: {tiqbal, mmoosaei, lriek}@nd.edu.

Abstract—As technology advances, autonomous robots are becoming more prominent in our daily lives, and they will be expected to effectively work with groups of humans. In order to be effective teammates, robots need to be able to understand human team dynamics as humans do. This understanding will help robots to recognize, anticipate, and adapt to human motion. There exist many group interaction scenarios where the activities performed by the group members are not only synchronous, but also change their tempo over time. During these interactions, humans employ temporal adaptation and anticipation to coordinate with one another in a team. The goal of our work is to leverage these mechanisms to enable robots to better coordinate with human teams, with a particular eye toward understanding rhythm. We are developing methods for robots to better understand tempo-changing behavior in teams, so that the robots can leverage this knowledge to coordinate with group members. This work will enable robots to recognize, anticipate, and adapt to human groups, and will help enable others in the robotics community to build more fluent and adaptable robots in the future.

I. INTRODUCTION

Autonomous robots are adopting a variety of roles in our everyday lives. While robots have long been involved in assembly lines, automating and increasing the efficiency of dexterous factory procedures [1], their use is growing in other areas where humans and robots work together proximately. For example, robots are being used to provide physical and behavioral healthcare, both in clinical and homecare settings [2]–[4]. One goal of this trend for roboticists is to enable robots to act, fluidly and longitudinally, as independently and efficiently as possible.

In a large majority of proximate settings, it is rarely one human and one robot. Instead, multiple people interact with one another and the robot simultaneously. Due to the challenging nature of these kinds of sensing situations, it can be difficult for a robot to perceive and understand all of the different movements of people well enough to make effective decisions as a teammate.

Group interaction is an important aspect of human social interaction, and is an active area of research [5]–[8]. Most groups create a state of interdependence, where each member’s outcomes and actions are determined in part by the other members of the group. Understanding the internal dynamics of a group enables a robot to interact more coherently and contingently with the other team members [9].

As humans, we encounter many group interaction scenarios where the activities performed by the group members are not only synchronous, but also change their tempo over time. Humans effectively time their actions to coordinate with others during everyday activities. Many researchers in psychology and neuroscience have investigated how humans effectively time their coordinated actions with others, especially in a constantly changing environment [10].

One important aspect of successful coordination is precise motor timing. Sensorimotor synchronization is one of the skills that people employ to achieve this [10], [11]. Sensorimotor synchronization is considered to be a fundamental human skill, described as the temporal coordination of an action with events in a predictable external rhythm. Humans are skilled at sensorimotor synchronization even with a sequence that contains tempo changes [11].

Thus, while working alongside humans, if robots can understand rhythmic tempo changes, then they can adapt to those changes and adjust their actions accordingly. Understanding tempo-changing behavior will enable us to build adaptable robots across a range of fields, from manufacturing and assembly processes to assistive technologies that help people with disabilities. If robots can make sense of tempo changes as humans do, then it will be possible for them to adapt and adjust their motions while working with humans. This will help robots become functional teammates to people.

Researchers in the fields of music, dance, neuroscience, and psychology have investigated the sensorimotor synchronization process in humans [10], [11]. These investigations included joint finger tapping with external rhythmic signals or with virtual characters, as well as joint drumming or joint dancing with other humans [12]. Researchers have found two main concepts that humans employ during coordination with external rhythms: temporal adaptation and temporal anticipation [11], [13].

Despite the fact that humans perform temporal adaptation and anticipation during the synchronization of their movements, robots do not yet know how to take advantage of these concepts. In this work, we plan to explore how robots can leverage these mechanisms to coordinate with other external rhythms. The goal of this exploration is to develop robust algorithms for robots that are sensitive to tempo changes during group activities, which will enable more fluent human-robot teaming. In this paper, we describe the initial development phases of these methods, and briefly discuss our experimental plans.

Fig. 1. Left: A representation of the adaptation process for the robot. The next event time of the robot is adapted from the timekeeper interval (Tn), the phase (α) and period (β) correction parameters, and the most recent asynchrony (asynn). Right: An anticipation method by linear extrapolation of inter-event intervals. The figures are adapted from [11].

II. METHODOLOGY

We are designing and implementing a new algorithm that enables robots to interact within human-robot teams, and that is robust to changes in the tempo of a rhythmic signal. Drawing inspiration from computational neuroscience and psychology [10], [11], [14], the algorithm has two main processes: an adaptation process and an anticipation process.

A robot will use the temporal adaptation part of the algorithm to compensate for temporal errors that occur while synchronizing with an external rhythm. Using this adaptation mechanism, a robot will be able to overcome these errors and change its motion timing to coordinate with an external rhythm.

Extending work by van der Steen et al. [11], [14], we developed an adaptation process based on information processing theory [15], and extended it to work for multiple events encountered by robots during an activity. This approach uses a linear timekeeper model to compensate for timing errors on a cycle-by-cycle basis. In the timekeeper model, the error correction process can be described as a linear autoregressive process. Two separate adaptive processes, phase correction and period correction, have been proposed in information-processing theory [11].

During phase correction, only the timing of the next action is adjusted to compensate for the asynchrony with a perceived rhythmic signal; the timekeeper interval remains the same. During period correction, the next action timing is adjusted and the internal timekeeper setting is also changed to compensate for the perceived asynchrony. Both processes independently modify the timing of the next action based on a percentage of the asynchrony [11], [14], [16].

The timing of the next event (t_{n+1}) is based on the current action time (t_n) and a timekeeper interval (T_n), compensated through the phase (α) and period (β) correction parameters by the most recent asynchrony (asyn_n) between the recent action and the perceived rhythmic signal [11]. In addition, the timekeeper period is compensated for the recent asynchrony by the period correction parameter (see Equation 1 and Figure 1-Left).

t_{n+1} = t_n + T_n − (α + β) · asyn_n
T_{n+1} = T_n − β · asyn_n                                (1)
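One cycle of the Equation 1 update can be sketched as a single function. The gain values below are illustrative defaults, not the correction parameters we use; in practice α and β would be tuned empirically.

```python
def adapt(t_n, T_n, asyn_n, alpha=0.5, beta=0.25):
    """One cycle of the Equation 1 timekeeper update.

    t_n    -- time of the most recent action
    T_n    -- current timekeeper interval
    asyn_n -- most recent asynchrony between the action and the
              perceived rhythmic signal
    alpha  -- phase correction gain (illustrative value)
    beta   -- period correction gain (illustrative value)
    """
    # Phase and period correction both shift the timing of the next action...
    t_next = t_n + T_n - (alpha + beta) * asyn_n
    # ...but only period correction persists in the timekeeper interval.
    T_next = T_n - beta * asyn_n
    return t_next, T_next
```

Iterating this update against a steady rhythm drives the asynchrony toward zero for suitable gains, which is the sense in which the process compensates for timing errors cycle by cycle.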

Additionally, the anticipation part of the algorithm will generate a prediction about the timing of the next action, so that it coincides with the timing of the next external rhythmic signal [11]. During this anticipation process, the human actions are taken as the precursor to generate a prediction about the timing of the next event in the external rhythmic signal (see Figure 1-Right).
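One simple way to realize this extrapolation, assuming only the observed event times of the external rhythm are available, is to fit a line to the inter-event intervals and extend it by one step. This is a sketch of the anticipation idea, not our final implementation.

```python
def anticipate(event_times):
    """Predict the next event time of an external rhythm by linear
    extrapolation of its inter-event intervals.

    Assumes at least two observed event times.
    """
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(intervals) < 2:
        # Not enough data to estimate a tempo trend; assume constant tempo.
        return event_times[-1] + intervals[-1]
    # Least-squares slope of interval length against interval index
    # captures how quickly the tempo is changing.
    n = len(intervals)
    mean_x = (n - 1) / 2
    mean_y = sum(intervals) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(intervals))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    # Extend the trend by one step to get the predicted next interval.
    return event_times[-1] + intervals[-1] + slope
```

For an accelerating sequence the predicted interval shrinks accordingly; for a steady sequence the slope is zero and the prediction reduces to constant-tempo extrapolation.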

III. EXPERIMENTAL TESTBED

To test our algorithm, we have designed an experimental testbed involving a human-robot drumming team. Figure 2-Left shows the experimental setup. A group of humans drums together with a drumming robot. Music is played so that the humans can drum at the same pace as an external rhythm. The group’s goal is to drum synchronously with the external music, while coordinating with one another and the robot.

Three participants stand near a table. There is a drum pad and a drumstick in front of each participant. The robot is placed on the other side of the table. The participants are able to see the drumming of the other participants and hear the drumming sounds. The robot’s movements are visible to each participant from the other side of the table (see Figure 2-Center).

A. The Drumming Robot

We built a robot capable of drumming with a human group. The robot is able to move a drumstick up and down at a rate of 4-5 Hz to hit a drum. This rate is close to how fast humans generally can drum (5-7 Hz), so the robot is well-suited for our purpose.

The robot’s effector (the drumstick) is attached to a Futaba Heli Rudder High-Speed servo motor with a speed of 0.06 sec/60◦ and a torque of 47 oz-in (3.4 kg-cm). The servo motor is connected to an Arduino Uno and a Renbotic Servo Shield. The Arduino communicates with a computer via rosserial, and is used as a ROS node [17].


Fig. 2. Left: Experimental setup. A Kinect 2 tracks participant motion, as do piezoelectric sensors in the drums. The robot uses these sensor data to decide when to hit its drum. Center/Right: Three participants drum with the robot during a pilot study. RGB and IR images are shown.

B. System Architecture and Activity Detection

We used a client-server architecture for processing data and controlling the robot, very similar to the one used in our previous experiments [9], [18], [19]. In this architecture, the client computers detect the events and their timings during the interaction, and send these data to a server. The server processes the data using the aforementioned adaptation and anticipation methods, generates appropriate commands for the robot, and sends them to the robot at the appropriate time.

To capture the activities of the participants, we used a Microsoft Kinect v2 sensor. We placed infrared markers at the top of the drumsticks to track the drumstick movements precisely. From the infrared images, we used a blob tracking technique to track the top of each drumstick. Figure 2-Right shows an infrared image during the drumming. For faster processing, we detected a small region of interest (ROI) around each drumstick position, searched only within that ROI in subsequent frames, and updated the ROI positions as the drumsticks moved.
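A minimal version of this ROI-based blob tracking might look as follows. The intensity threshold and window size are illustrative values, not the ones used in our system.

```python
import numpy as np

def track_marker(ir_frame, roi, threshold=200, half=20):
    """Track a bright infrared marker inside a region of interest (ROI).

    ir_frame -- 2-D array of IR intensities
    roi      -- (row0, row1, col0, col1) bounds of the current search window
    Returns the marker centroid (row, col) and an updated ROI centered on it.
    """
    r0, r1, c0, c1 = roi
    window = ir_frame[r0:r1, c0:c1]
    rows, cols = np.nonzero(window > threshold)   # bright-blob pixels
    if rows.size == 0:
        return None, roi                          # marker lost; keep old ROI
    r = r0 + rows.mean()                          # centroid in full-frame coords
    c = c0 + cols.mean()
    new_roi = (max(int(r) - half, 0), int(r) + half,
               max(int(c) - half, 0), int(c) + half)
    return (r, c), new_roi
```

Searching only the small window around the last known position is what makes the per-frame cost low enough for real-time use.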

When the humans started moving their drumsticks towards the drum pads, we detected the start of these events by tracking the infrared blobs. This provided an indicator that a human was about to hit the drum, which we used as a precursor to inform the robot’s movement.

To precisely measure when the humans and the robot hit the drum pads, we attached a piezoelectric sensor under each drum pad. These sensors are attached to an Arduino. Whenever a human or the robot hit a drum pad, the Arduino generated a “hit” event, and the Arduino ROS node instantaneously published the event time along with the event type (“hit”). The server listened to these messages. For now, our method incorporates visual and tactile rhythmic information, though in the future we plan to also incorporate audio.
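The hit detection can be illustrated with a simple threshold-plus-refractory scheme, sketched here in Python for clarity; the actual firmware logic on the Arduino may differ, and the threshold and refractory values are hypothetical.

```python
def detect_hits(samples, timestamps, threshold=0.5, refractory=0.05):
    """Turn a stream of piezo readings into discrete 'hit' event times.

    A hit is registered when the signal crosses the threshold; further
    crossings are ignored for a short refractory window so that a single
    strike does not produce multiple events.
    """
    hits = []
    last = float("-inf")
    for value, t in zip(samples, timestamps):
        if value > threshold and (t - last) >= refractory:
            hits.append(t)
            last = t
    return hits
```

The refractory window matters because a drum strike rings for several samples, all of which would otherwise be reported as separate hits.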

IV. EXPERIMENTAL PLAN

We are currently implementing the algorithm to test on the robot. Upon completion, we plan to perform a set of experiments in which the robot will drum with humans across a variety of rhythmic conditions. We will employ different aspects of the algorithm on the robot to investigate how well it performs in human-robot teams.

These studies will give us the opportunity to explore many dimensions of the process of fluent human-robot interaction, and to identify which features will need attention during future robot algorithm development and interaction design.

REFERENCES

[1] R. Wilcox, S. Nikolaidis, and J. A. Shah, “Optimization of temporal dynamics for adaptive human-robot interaction in assembly manufacturing,” in Proc. of RSS, 2012.

[2] T. Carlson and Y. Demiris, “Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload,” IEEE Trans. Syst. Man. Cybern. B. Cybern., 2012.

[3] H. I. Krebs et al., “Rehabilitation robotics: Performance-based progressive robot-assisted therapy,” Autonomous Robots, 2003.

[4] L. D. Riek, “Robotics technology in mental health care,” Artificial Intelligence in Behavioral and Mental Health Care, 2015.

[5] N. Sebanz, H. Bekkering, and G. Knoblich, “Joint action: bodies and minds moving together,” Trends Cogn. Sci., 2006.

[6] D. R. Forsyth, Group Dynamics, 4th ed. T. Wadsworth, 2009.

[7] M. P. Michalowski, R. Simmons, and H. Kozima, “Rhythmic attention in child-robot dance play,” in IEEE RO-MAN, 2009.

[8] S. Devin, G. Milliez, M. Fiore, A. Clodic, and R. Alami, “Some essential skills and their combination in an architecture for a cognitive and interactive robot,” CoRR, 2016.

[9] T. Iqbal, S. Rack, and L. D. Riek, “Movement coordination in human-robot teams: A dynamical systems approach,” IEEE Transactions on Robotics, 2016.

[10] B. H. Repp, “Sensorimotor synchronization: A review of the tapping literature,” Psychonomic Bulletin & Review, 2005.

[11] M. C. M. van der Steen and P. E. Keller, “The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization,” Front. Hum. Neurosci., 2013.

[12] G. Hoffman and G. Weinberg, “Gesture-based human-robot jazz improvisation,” in IEEE ICRA, 2010.

[13] R. I. Schubotz, “Prediction of external events with our motor system: towards a new framework,” Trends Cogn. Sci., 2007.

[14] M. M. van der Steen, N. Jacoby, M. T. Fairhurst, and P. E. Keller, “Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation,” Brain Research, 2015.

[15] A. Semjen, D. Vorberg, and H.-H. Schulze, “Getting synchronized with the metronome: Comparisons between phase and period correction,” Psychological Research, 1998.

[16] B. H. Repp and P. E. Keller, “Sensorimotor synchronization with adaptively timed sequences,” Human Movement Science, 2008.

[17] M. Moosaei, C. J. Hayes, and L. D. Riek, “Performing facial expression synthesis on robot faces: A real-time software system,” New Frontiers in Human-Robot Interaction.

[18] T. Iqbal and L. D. Riek, “A method for automatic detection of psychomotor entrainment,” IEEE Transactions on Affective Computing, 2015.

[19] T. Iqbal, M. J. Gonzales, and L. D. Riek, “Joint action perception to enable fluent human-robot teamwork,” in IEEE Int’l Symposium on Robot and Human Interactive Communication, 2015.