
Robots, Design for Values and Dennett's Stances

July 4, 2018

Pieter E. Vermaas, Delft University of Technology, p.e.vermaas@tudelft.nl


Aim

To adopt Dennett's stances and extend them to descriptions of robots

To argue for two conceptual advantages:

1. side-stepping the issue of whether robots are conscious

2. identifying values in responses when predictions about robots fail

Plan

1. Introduce ingredient 1 of Dennett: the three stances

2. Present the conceptual advantage for describing robots

3. Add engineering complexity to Dennett’s stances

4. Introduce ingredient 2 of Dennett: when stances fail

5. Identify values in responses when predictions about robots fail

Ingredient 1: Dennett’s three stances

P: physical stance

D: design stance

I: intentional stance

More epistemology than ontology

We apply a stance to an entity primarily if doing so enables us to predict its behavior effectively and efficiently

P: in the physical stance one applies the natural sciences to entities

D: in the design stance one ascribes functions and goals to entities

I: in the intentional stance one

• takes an entity as conscious and rational, and

• assumes it has certain observations, thoughts and aims
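The selection principle above can be read as a small decision rule: among the stances that predict an entity's behavior, adopt the cheapest one. A minimal sketch in Python (the names and the cost model are hypothetical illustrations, not part of Dennett's account):

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: a stance is a prediction strategy with a descriptive
# cost. Per the selection principle above, we adopt the cheapest stance
# that still predicts the entity's behavior correctly.

@dataclass
class Stance:
    name: str
    cost: int                         # descriptive effort (lower = more efficient)
    predict: Callable[[object], str]  # predicted next behavior of the entity

def choose_stance(entity: object, stances: list[Stance],
                  observe: Callable[[object], str]) -> Optional[Stance]:
    """Return the cheapest stance whose prediction matches observation."""
    for stance in sorted(stances, key=lambda s: s.cost):
        if stance.predict(entity) == observe(entity):
            return stance
    return None  # every stance failed; see ingredient 2 below
```

For a chess computer, for instance, the intentional stance ("it wants to protect its queen") is far cheaper than a physical description and predicts well, so it would be the stance chosen.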

The conceptual advantage

Ontological issues of whether entities are actually designed or really conscious are side-stepped

Dennett: we take aliens as conscious if the intentional stance efficiently predicts their behavior

For our topic: we take robots as conscious if the intentional stance efficiently predicts their behavior

An argument for splitting the design stance

P: physical stance

D: design stance

I: intentional stance

An efficient description of a technical artefact refers to the functions and goals of the artefact as assigned by an engineer

Vermaas, Carrara, Borgo, Garbacz (2013) Synthese 190, 1131

Four stances

P: physical stance

D: design stance

I: intentional stance

iD: intentional design stance

Many more stances

Dennett's three stances can be combined in many ways, for example:

iD: intentional design stance

• Technical artefacts

pI: physical intentional stance

• Evolution of the mind

iI: intentional intentional stance

• Robots

iI: The intentional intentional stance

1. take two entities as conscious and rational

2. assume that both have certain observations, thoughts and aims

3. presuppose that the one (the robot) is designed by the other (the engineer)
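The combined stances have a simple compositional structure: an outer stance taken toward the designer and an inner stance taken toward the designed entity. A minimal sketch of that structure (hypothetical names):

```python
from enum import Enum
from dataclasses import dataclass

class Base(Enum):
    P = "physical"
    D = "design"
    I = "intentional"

@dataclass(frozen=True)
class ComposedStance:
    outer: Base  # stance taken toward the designer
    inner: Base  # stance taken toward the designed entity

iD = ComposedStance(outer=Base.I, inner=Base.D)  # technical artefacts
pI = ComposedStance(outer=Base.P, inner=Base.I)  # evolution of the mind
iI = ComposedStance(outer=Base.I, inner=Base.I)  # robots
```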

So what?

The conceptual advantage remains: ontological issues of whether robots are really conscious are side-stepped

Additionally, the different stances allow us to make conceptual distinctions in our responses when predictions about robots fail

Ingredient 2: Dennett’s analysis of when stances fail

If we apply the intentional stance I, then we:

• take an entity as conscious and rational

• assume that it has certain observations, thoughts and aims

When our predictions fail, we:

I → I': adjust our assumptions about the intentions, or

I → D: try applying the design stance

Ingredient 2: Dennett’s analysis of when stances fail

If we apply the design stance D, then we assign functions and goals to an entity

When our predictions fail, we:

D → D': adjust our functions and goals assignment, or

D → P: try applying the physical stance
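Taken together, the two failure analyses form a small transition table: on a failed prediction, either revise within the current stance (the primed variants) or fall back to a lower stance. A sketch, with hypothetical names:

```python
# Hypothetical sketch of Dennett's failure analysis as a transition table;
# a prime (I', D') means "stay in the stance but revise its assumptions".

RESPONSES: dict[str, list[str]] = {
    "I": [
        "I'",  # adjust our assumptions about the intentions
        "D",   # fall back: try the design stance
    ],
    "D": [
        "D'",  # adjust our functions and goals assignment
        "P",   # fall back: try the physical stance
    ],
}
```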

What about the intentional design stance?

With the intentional design stance iD, we assign functions and goals to an artefact relative to the intentions of an engineer

When our predictions fail, we:

iD → D': adjust our functions and goals assignment,

iD → P: try applying the physical stance, or

iD → i'D: adjust our assumptions about the engineer

When predictions about technical artefacts fail

Volkswagen cars have higher emissions than predicted

iD → iD’: refine the VW description of the engine

iD → i’D: redescribe the intentions of the VW engineers

iD → i’D’: redescribe the intentions of the VW engineers and adjust the VW description of the engine

iD → D’: adjust our description of the engine

iD → P: calculate the whole chemical process in the engine
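Continuing the transition-table sketch from above, the iD responses illustrated by the VW example can be added as a further entry (hypothetical code, as before):

```python
# Extending the hypothetical RESPONSES table with the intentional design
# stance, mirroring the five responses in the VW example above:
RESPONSES["iD"] = [
    "iD'",   # refine the description of the engine
    "i'D",   # redescribe the intentions of the engineers
    "i'D'",  # redescribe both
    "D'",    # adjust the description of the engine, dropping the engineer
    "P",     # calculate the whole physical process in the engine
]
```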


When predictions about robots fail

When predictions from the intentional intentional stance iI fail, we:

iI → iI’: adjust the intentions we assign to the robot

iI → i’I: adjust the intentions we assign to the engineer

iI → i’I’: adjust the intentions we assign to both

iI → iD: apply the intentional design stance to the robot

iI → i'D: apply the intentional design stance to the robot and adjust the intentions of the engineer

iI → I’: apply the intentional stance to the robot

iI → D: apply the design stance to the robot
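In the same transition-table sketch, the intentional intentional stance gets the richest set of responses, because either intentional component can be revised or dropped (hypothetical code, as before):

```python
# Extending the hypothetical RESPONSES table: with two intentional
# components, either one can be revised (primed) or dropped entirely.
RESPONSES["iI"] = [
    "iI'",   # adjust the intentions we assign to the robot
    "i'I",   # adjust the intentions we assign to the engineer
    "i'I'",  # adjust the intentions we assign to both
    "iD",    # apply the intentional design stance to the robot
    "i'D",   # ...and also adjust the intentions of the engineer
    "I'",    # apply the intentional stance to the robot alone
    "D",     # apply the design stance to the robot alone
]
```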

When predictions about Tesla cars fail

iI → iI’: adjust the car’s observations and decision algorithms

iI → i’I: adjust the intentions we assign to Tesla

iI → i’I’: adjust the intentional description of both

iI → iD: go through Tesla's technical description of the car

iI → i’D: adjust the intentions we assign to Tesla and create a (new) technical description of the car

iI → I’: describe the car as an independent intentional entity

iI → D: describe the car as a complex entity


The second conceptual advantage

These responses to failures in predicting the behavior of robots can be grouped, for:

• showing that it makes sense to introduce the intentional intentional stance

• identifying the values at play in these responses

Responses that include the design context

Efficient responses, using all information available

iI → iI': adjust the intentions we assign to the robot

iI → i'I: adjust the intentions we assign to the engineer

iI → i'I': adjust the intentions we assign to both

iI → iD: apply the intentional design stance to the robot

iI → i'D: apply the intentional design stance to the robot and adjust the intentions of the engineer

Take the robot as stand-alone

Less efficient responses, discarding information

iI → I': apply the intentional stance to the robot

iI → D: apply the design stance to the robot

Assigning new intentions to only the robot

Assuming autonomy of the robot

iI → iI': adjust the intentions we assign to the robot

iI → I': apply the intentional stance to the robot

Assigning new intentions to the engineer

Assuming conspiracy: robots as means

iI → i'I: adjust the intentions we assign to the engineer

iI → i'I': adjust the intentions we assign to both

iI → i'D: apply the intentional design stance to the robot and adjust the intentions of the engineer


Aim

To adopt Dennett's stances and extend them to descriptions of robots

To argue for two conceptual advantages:

1. side-stepping the issue of whether robots are conscious

2. identifying values in responses when predictions about robots fail

– efficient responses taking robots as designed

– inefficient responses taking robots as stand-alone

– assuming robot autonomy

– assuming engineering conspiracy

Designing robots for values: transparency

Make explicit:

1. that robots are designed

(gives efficiency in predicting behavior and in responding to failures)

2. whether robots should stick to their purposes or are autonomous

(avoids robots being unnecessarily taken as autonomous)

3. what tasks robots should do

(avoids conspiracy thinking)
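One way to operationalize these three points, purely as an illustration, is a machine-readable self-declaration that a robot could publish; every field name and value below is invented for this sketch:

```python
# Hypothetical sketch: a transparency self-declaration covering the three
# points above. All field names and values are invented for illustration.
transparency_declaration = {
    "designed": True,                  # 1. the robot is a designed artefact
    "designer": "ACME Robotics",       #    (hypothetical manufacturer)
    "autonomy": "sticks_to_purposes",  # 2. alternative value: "autonomous"
    "tasks": ["vacuum floors",         # 3. the tasks the robot should do,
              "map rooms"],            #    discouraging conspiracy thinking
}
```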