
Human-Robot Interaction

-State of the Art-

João Quintas

Introduction

“Human-robot interaction (HRI) is the study of interactions between people (users) and robots. HRI is multidisciplinary with contributions from the fields of human-computer interaction, artificial intelligence, robotics, natural language understanding, and social science (psychology, cognitive science, anthropology, and human factors)…”. [1]

GOAL

“The basic goal of HRI is to develop principles and algorithms to allow more natural and effective communication and interaction between humans and robots….” [1]

Conferences

Principal Conferences:
• The IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN) was founded in 1992 by Profs. Toshio Fukuda, Hisato Kobayashi, Hiroshi Harashima and Fumio Hara. Early workshop participants were mostly Japanese, and the first eight workshops were held in Japan. Since 2000, workshops have also been held in Europe and the United States, and participation has become international in scope.
• The first ACM International Conference on Human-Robot Interaction (HRI 2006) was held in March 2006.
• The second ACM/IEEE International Conference on Human-Robot Interaction (HRI 2007) was held in March 2007.
• The third ACM/IEEE International Conference on Human-Robot Interaction (HRI 2008) was held in March 2008.
• The first International Conference on Human-Robot Personal Relationships (HRPR 2008) was held in June 2008.

Related Conferences:
• IEEE-RAS/RSJ International Conference on Humanoid Robots (Humanoids)
• Ubiquitous Computing (UbiComp)
• IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
• Intelligent User Interfaces (IUI)
• Computer Human Interaction (CHI)
• American Association for Artificial Intelligence (AAAI)
• Interact

Application-Oriented HRI Research

Some application areas are:
• Search and Rescue
• Entertainment
• Education
• Field robotics
• Home and companion robotics
• Hospitality
• Rehabilitation and Elder care

Although the fields of application of HRI are very extensive, the main focus of research until now has been on the types of interaction between humans and robots.

Types of Interaction

Vision-Based Human-Robot Interaction

“The vision system of a social robot is responsible for solving tasks such as identifying faces, measuring head and hand poses, capturing human motion, recognizing gestures and reading facial expressions…” [2]

Some issues related to vision-based HRI are:
• Face detection and recognition (sketched below)
• Gesture recognition
• Human behaviour capture and imitation
• Vision system architecture

Some applications related to vision-based HRI are:
• Assistive robotics
• Human-guided learning
• Visual attention mechanisms
• Biologically inspired social robot models
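As an illustration of the face detection issue listed above, here is a minimal sketch in Python using OpenCV's pretrained Haar-cascade classifier, one common off-the-shelf approach; the camera index and display loop are illustrative placeholders, not part of any cited system.

import cv2

# Load OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera; a robot would use its own feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces at multiple scales in the grayscale image.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()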

Vision-Based HRI Examples

Understanding Human Intentions via Hidden Markov Models in Autonomous Mobile Robots [3]
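The exact models of [3] are not reproduced here; the sketch below only illustrates the core idea, assuming hypothetical two-state intention models and discretized observations: each candidate intention is modeled as an HMM, and the intention whose model best explains the observed sequence (scored with the forward algorithm) is selected.

import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probs, A: state transitions, B: emissions)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()                 # rescale to avoid numeric underflow
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Hypothetical models: observations are discretized relative-distance
# symbols (0 = far, 1 = near). One HMM per candidate intention.
models = {
    "approach": (np.array([0.9, 0.1]),                # pi
                 np.array([[0.7, 0.3], [0.1, 0.9]]),  # A
                 np.array([[0.8, 0.2], [0.2, 0.8]])), # B
    "pass_by":  (np.array([0.5, 0.5]),
                 np.array([[0.6, 0.4], [0.4, 0.6]]),
                 np.array([[0.5, 0.5], [0.5, 0.5]])),
}
obs = [0, 0, 1, 1, 1]  # person moving from far to near
best = max(models, key=lambda m: forward_log_likelihood(*models[m], obs))
print(best)  # the intention whose HMM best explains the sequence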

A Gesture Based Interface for Human-Robot Interaction [4]

Audio-Based Human-Robot Interaction

In audio-based approaches, the interaction between human and robot, or between robot and environment, is carried out using sounds from the surrounding area. In HRI this approach is used, for instance, in speech recognition or in directing the robot's attention to a specific spot.
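As a concrete example of directing attention by sound, here is a minimal sketch of two-microphone sound source localization via time difference of arrival (TDOA); the sample rate, microphone spacing and signals are illustrative assumptions, not taken from any cited system.

import numpy as np

def tdoa_bearing(left, right, fs, d, c=343.0):
    """Estimate a sound source bearing from the time difference of
    arrival between two microphones, via cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # +ve: left mic heard it first
    tau = lag / fs                           # arrival-time difference (s)
    # Far-field model: path difference d*sin(theta) = c*tau
    return np.arcsin(np.clip(tau * c / d, -1.0, 1.0))

# Synthetic check: a noise burst that reaches the left microphone
# 3 samples earlier than the right one (all values are illustrative).
fs, d = 16000, 0.2                 # sample rate (Hz), mic spacing (m)
rng = np.random.default_rng(0)
left = rng.standard_normal(2048)
right = np.concatenate([np.zeros(3), left[:-3]])
theta = tdoa_bearing(left, right, fs, d)
print(f"bearing ~ {np.degrees(theta):.1f} degrees")  # ~18.8, toward the left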

Some examples:

Fear-type emotion recognition for future audio-based surveillance systems [5]

Robot Audition Project (video) [6]

Touch-Based Human-Robot Interaction

This type of interaction tries to replicate the human sense of touch in robots, using different kinds of sensors. It can be an appropriate mode of interaction for people with certain disabilities who must rely on touch to interact with the environment (e.g. a guide robot for blind people).
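A minimal sketch of the idea, assuming a single force sensor and hand-picked variance thresholds (both hypothetical, to be tuned on real data): a window of readings is reduced to a simple feature and mapped to a coarse interaction category.

import numpy as np

def categorize_touch(samples, gentle_var=0.05, rough_var=0.5):
    """Crude interaction categories from a window of force-sensor
    readings: signal variance separates stroking from hitting."""
    v = np.var(samples)
    if v < gentle_var:
        return "gentle (e.g. stroking)"
    if v < rough_var:
        return "moderate (e.g. tapping)"
    return "rough (e.g. hitting)"

window = np.array([0.2, 0.25, 0.22, 0.21, 0.24])  # simulated readings
print(categorize_touch(window))  # -> gentle (e.g. stroking)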

Using proprioceptive sensors for categorizing human-robot interactions [9]

(Open question: would this part fit better under multi-modal interaction, and then appear there as an application?)

Multi-Modal Human-Robot Interaction

“…approach is based on a method for multi-modal person tracking which uses a pan-tilt camera for face recognition, two microphones for sound source localization, and a laser range finder for leg detection…” [8]

The multi-modal approach uses different types of sensors and devices with the objective of ensuring better interaction between humans and robots.
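[8] combines a pan-tilt camera, two microphones and a laser range finder. The sketch below illustrates one possible fusion step in that spirit, not the paper's actual method: each modality reports a bearing to the person plus a confidence, and a confidence-weighted circular mean yields a single attention direction. All numbers are hypothetical.

import numpy as np

def fuse_bearings(estimates):
    """Fuse (bearing_rad, confidence) pairs from several sensors into
    one direction via a confidence-weighted circular mean."""
    x = sum(c * np.cos(b) for b, c in estimates)
    y = sum(c * np.sin(b) for b, c in estimates)
    return np.arctan2(y, x)

face  = (np.radians(10), 0.9)  # pan-tilt camera: face detected
sound = (np.radians(25), 0.4)  # microphone pair: rough bearing
legs  = (np.radians(12), 0.7)  # laser range finder: leg pattern
print(np.degrees(fuse_bearings([face, sound, legs])))  # ~13.7 degrees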

This area of study is under active development. Some examples of work in this field include:

Multi-Modal Attention System for a Mobile Robot [8]

Fields of Application

Search and Rescue

Human–Robot Interactions During the Robot-Assisted Urban Search and Rescue Response at the World Trade Center [10]

Some Types of Interfaces

Robotic User Interfaces (Bartneck & Okada) [11]

References

[1] http://en.wikipedia.org/wiki/Human_robot_interaction (October 22, 2008);

[2] http://paloma.isr.uc.pt/~hri06/ (October 22, 2008);

[3] “Understanding Human Intentions via Hidden Markov Models in Autonomous Mobile Robots”, Richard Kelley, Monica Nicolescu, Alireza Tavakkoli, Mircea Nicolescu, Christopher King, George Bebis;

[4] “A Gesture Based Interface for Human-Robot Interaction”, Stefan Waldherr, Roseli Romero, Sebastian Thrun;

[5] “Fear-type emotion recognition for future audio-based surveillance systems”, C. Clavel, I. Vasilescu, L. Devillers, G. Richard, T. Ehrette, in Speech Communication, Volume 50, Issue 6, June 2008, Elsevier Science Publishers B.V.;

[6] Robot Audition Project – Kyoto University, Japan (2006);

[7] http://www.nrl.navy.mil/aic/iss/aas/IntelligentHumanRobotInteractions.php (October 22, 2008);

[8] “Providing the Basis for Human-Robot-Interaction: A Multi-Modal Attention System for a Mobile Robot”, Sebastian Lang, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gernot A. Fink, and Gerhard Sagerer, in ICMI '03: Proceedings of the 5th International Conference on Multimodal Interfaces, November 2003;

[9] “Using proprioceptive sensors for categorizing human-robot interactions”, T. Salter, F. Michaud, D. Létourneau, D. C. Lee, I. P. Werry, in HRI '07: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, March 2007;

[10] “Human–Robot Interactions During the Robot-Assisted Urban Search and Rescue Response at the World Trade Center”, Jennifer Casper and Robin Roberson Murphy;

[11] “Robotic User Interfaces”, Christoph Bartneck, Michio Okada, in Proceedings of the Human and Computer Conference (HC2001), Aizu, pp. 130-140.