A.G. Billard, Autonomous Robots Class Spring 2007 http://lasa.epfl.ch

Doctoral School – Robotics Program
Autonomous Robots Class

Spring 2007

Human-Robot Interaction
Ethical issues in human-robot interaction research

Aude G Billard

Learning Algorithms and Systems Laboratory (LASA)
EPFL, Swiss Federal Institute of Technology

Lausanne, Switzerland

aude.billard@epfl.ch

Overview of the Class

1. Interfaces and interaction modalities (~45 min)

2. User-centered design of social robots (~45 min) + Contest (~15 min)

3. Social learning and skill acquisition via teaching and imitation (~1 h 15 min)

4. Robots in education, therapy and rehabilitation (~45 min)

5. Evaluation methods and methodologies for HRI research (~30 min)

6. Ethical issues in human-robot interaction research (~15 min) + DISCUSSION (~30 min)

Ethical issues in human-robot interaction

Adapted from slides by Blay Whitby

Centre for Research in Cognitive Science
Department of Informatics
University of Sussex

blayw@sussex.ac.uk

The Humanity Principle

Technology should be built, designed, and made available only insofar as it benefits humanity.

The Glover Principle

• What Sort of People Should There Be? (1984) - A simple moratorium on human cloning is merely a delaying tactic.

• There is no ethically consistent way of resisting technological advances (unless the technology is all bad).

• Therefore constant and complete ethical scrutiny is required.

Technology Ethics

Ethical input is therefore essential at the design and implementation phases of many contemporary technologies.

Human-like interfaces; exploitation of human affective responses; robots and other systems with intimate roles all fall into this category.

Example Technologies

• Caring systems for the elderly (Fogg and Tseng 1999)

• Tutoring systems that attempt to employ or modify the affective state of the learner (du Boulay et al 1999)

• Cyber-therapies (ELIZA has not gone away)

• Kismet (Breazeal and Scassellati 2002)

• The general enthusiasm for more human-like interfaces

Some Ethical Concerns

• ‘Vulnerable users’

• Deliberate manipulation of human emotions

• Interfaces, programs, and robots that pretend or explicitly claim to be more human-like (or perhaps animal-like) than they really are

Ethical Concerns in Practice

• How vulnerable are we?

• The ‘uncanny valley’

• ‘Human replacement’ caring systems

• Autism and robots

The Uncanny Valley

Masahiro Mori (1970/2005)

As a robot's appearance becomes more human-like, people's affinity for it rises, until a near-human likeness provokes a sudden sense of unease (the ‘valley’); affinity recovers only as the likeness approaches that of a real human.

Autism and Robots

The Aurora Project, University of Hertfordshire.

Use of robots to improve communication skills in autistic children. (Woods et al 2004, 2005)

An important ethical question is: what should we do with this information?

Claims not made

• The uncanny valley will save us all from any real harm.

• Human-like interfaces are, in general, morally repugnant.

• Human-like interfaces should not be built.

A call to action

• The design, implementation, testing, and use of all types of human-like interfaces should now be subject to ethical scrutiny.

• The concept of ‘vulnerable users’ should enter our ethical vocabulary.

• Interdisciplinary scientific research aimed at resolving some of the relevant empirical questions should now be encouraged.

Discussion

Imitation, Language and other Natural Means of Interaction we build for our Machines:

Do we really want machines to resemble us that much?

Gender in Robots?

Role models?

Rationale for building social machines / robots

Why do we want social robots?

Rationale for building social machines / robots

There is a long history of failure in trying to program the robot explicitly and completely, so that it could cope with any (unexpected) change in task and environment.

It is too difficult to handcraft the robot's controller, as one could never foresee all the situations the robot might encounter, nor completely model the world as perceived by the robot.

Hence: take inspiration from the way humans and other animals interact to build more robust and adaptive robots.

Employee / slave robots

Mid-20th century: industrial robots, tele-operated robots.

These produced a large number of behaviors, completely preprogrammed and highly tuned to a specific environment (type of objects to manipulate, sequence of motions).

Employee robots

Early 1980s: learning robots showed some adaptability in the control system, e.g. regenerating trajectories to avoid obstacles.

Learning could take an unbearably long time and might not converge.

Best to start with an example so as to reduce the search space of the learning system.

Learning from demonstration: take the human as the model.
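The point about a demonstration reducing the learner's search space can be illustrated with a toy sketch (the functions, waypoint values, and the hill-climbing choice here are illustrative assumptions, not from the lecture): a learner refines a 5-point trajectory by stochastic hill-climbing, and seeding the search with a noisy human demonstration starts it far closer to the target than starting from scratch.

```python
import random

def cost(params, target):
    # Squared error between a candidate trajectory and the (unknown) target.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(start, target, steps=200, step_size=0.1, seed=0):
    # Stochastic hill-climbing: perturb one waypoint at a time and keep
    # the perturbation only if it lowers the cost, so cost never increases.
    rng = random.Random(seed)
    best, best_cost = list(start), cost(start, target)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step_size, step_size)
        c = cost(cand, target)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Hypothetical 5-waypoint reaching trajectory (values made up for illustration).
target  = [0.0, 0.3, 0.7, 0.9, 1.0]      # what the task actually requires
demo    = [0.1, 0.25, 0.65, 0.95, 1.05]  # noisy human demonstration
scratch = [0.0, 0.0, 0.0, 0.0, 0.0]      # no prior knowledge

_, cost_from_demo = hill_climb(demo, target)
_, cost_from_scratch = hill_climb(scratch, target)
```

Because the demonstration already lies near the target, the seeded search begins at a cost of about 0.02 versus 2.39 from scratch: the demonstration has eliminated most of the search space before learning even starts.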

From Employee to Companion

While many engineers still struggle to accept the idea of being bio-inspired (a legacy of the 20th-century gap between the natural and the exact sciences), they no longer question the need to provide robots with such capabilities.

Why is it so?

From Employee to Companion

Why not conceive machines that bear no resemblance to humans in the way they act, and expect humans to adapt to the machine instead?

Indeed, there are grounds for this: of humans and robots, humans are certainly, at this stage, the more versatile and adaptive agents.

From Employee to Companion

So far, machines (cars, airplanes, radios, etc.) have been acting in their “own way”, perhaps a bit “autistic-like”:

• They do not care whether you are or not in the room.

• They do not anticipate your behavior, nor require training and verbal explanations in order to get started on their job.

• Their functioning is independent of your mood (and of theirs, for that matter).

And yet we are very satisfied with those machines. Why, then, do we want them to become more human-like?

From Employee to Companion

Researchers ended up providing robots with a number of capabilities that are inherent to the way humans develop and learn socially.

These are the abilities:

• to recognize human gestures and motions,

• to interpret them in the light of the task at hand,

• to use them to generate an appropriate (often imitative) response.

Consequences of endowing robots with the ability to imitate and to interpret people’s behavior

Once robots are capable of imitating and interpreting our behaviors, they will not be far from passing judgment on them.

Do we want to specifically prevent them from doing so?

If so, how do we find a good tradeoff, keeping them capable of discriminating well enough not to imitate irrelevant behaviors?

Imitation is a dangerous skill to have!

Consequences of endowing robots with the ability to imitate and to interpret people’s behavior

The development of the child’s ability to imitate is tightly coupled to the development of a theory of mind, and of the ability to differentiate between self and others.

If robots are provided with the life-long learning abilities we hope for them, together with the ability to imitate, will they develop a sense of self and a form of theory of mind?

Consequences of endowing robots with the ability to imitate and to interpret people’s behavior

Advantage of having a sense of self: it would allow the robot to differentiate between the perception of its own actions and that of others’ actions, and between its own expectations and those of the human.

The issue has more to do with:

• How would the robot use this sense of self?

• Would this be to the detriment of humans or, on the contrary, to their benefit?

Do we want robots to become perfect companions or perfect slaves?

Consequences of endowing robots with the ability to imitate and to interpret people’s behavior

Will people attribute a theory of mind to the robots?

What consequences will this have on the human-robot relationship?

Should we worry about this?