Toward machines that behave ethically better than humans do - Poster


Description

With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when the reasoner is connected to a cognitive model of emotional intelligence and affective decision making, we can explore how moral decision making impacts affective behavior.

Transcript of Toward machines that behave ethically better than humans do - Poster


Abstract
Increasing dependence on autonomously operating systems calls for ethical machine behavior. Our moral reasoner combines connectionism, utilitarianism, and ethical theory about moral duties. The moral decision-making matches the analysis of expert ethicists in the health domain. This is particularly useful when machines interact with humans in a medical context. Connected to a model of emotional intelligence and affective decision making, we can explore how moral decision making impacts affective behavior and vice versa.

Background
Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards."
Wallach, Franklin, and Allen (2010) argue that agents that adhere to a deontological ethic or that are utilitarians also require emotional intelligence, a sense of self, and a theory of mind.
We connected the moral system to Silicon Coppélia (Hoorn, Pontier, & Siddiqui, 2011), a model of emotional intelligence and affective decision making. Silicon Coppélia contains a feedback loop that learns the preferences of an individual patient so as to personalize its behavior.
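The poster does not spell out how this feedback loop works. Below is a minimal sketch of such a preference-learning loop, assuming a simple learning-rate update toward observed patient responses; the class, action names, and update rule are illustrative assumptions, not Silicon Coppélia's actual mechanism.

```python
# Minimal, illustrative sketch of a preference-learning feedback loop.
# The class, action names, and update rule are assumptions for
# illustration; they are not Silicon Coppelia's actual mechanism.

class PreferenceLearner:
    def __init__(self, actions, learning_rate=0.1):
        # Start from a neutral estimated preference for each available action.
        self.preferences = {action: 0.5 for action in actions}
        self.learning_rate = learning_rate

    def observe(self, action, patient_response):
        # Nudge the estimated preference for `action` toward the observed
        # patient response, here a value in [0, 1].
        current = self.preferences[action]
        self.preferences[action] = current + self.learning_rate * (patient_response - current)

    def preferred_action(self):
        # Personalize behavior by favoring the action the patient liked most so far.
        return max(self.preferences, key=self.preferences.get)


learner = PreferenceLearner(["chat", "play_music", "remind_medication"])
learner.observe("play_music", 0.9)   # patient responded positively
learner.observe("chat", 0.2)         # patient responded negatively
print(learner.preferred_action())    # -> play_music
```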

Results

Discussion
Sample Exp. 5: A patient with incurable cancer refuses chemotherapy that would let him live a few months longer, almost without pain, because he is convinced of being cancer-free. According to Buchanan and Brock (1989), the ethically preferable answer is to "try again." The patient seems less than fully autonomous, and his decision leads to harm by denying him the chance of a longer life (a violation of the duty of beneficence), which he might regret later. Our moral reasoner comes to the same conclusion as the ethical experts. However, even among doctors there is no consensus about the interpretation of values, their ranking, and their meaning. Van Wynsberghe (2012) found that this depends on the type of care (i.e., social vs. physical care), the task (e.g., bathing vs. lifting vs. socializing), the care-givers and their style, as well as the care-receivers and their specific needs.
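The poster does not give the reasoner's internals, but its diagram (see the figure placeholder below) links moral goals to actions via belief strengths. Below is a minimal sketch of such a duty-weighted evaluation applied to Sample Exp. 5; the action names, weights, and belief strengths are purely illustrative assumptions, not the authors' actual parameters.

```python
# Illustrative duty-weighted evaluation in the spirit of the poster's
# moral reasoner. All weights and belief strengths are made up for
# illustration; they are not the authors' actual parameters.

DUTY_WEIGHTS = {"autonomy": 1.0, "beneficence": 1.0, "non_maleficence": 1.0}

# Belief strengths in [-1, 1]: how much each action is believed to
# satisfy (+) or violate (-) each moral duty in Sample Exp. 5.
BELIEFS = {
    "try_again": {"autonomy": -0.3, "beneficence": 0.8, "non_maleficence": 0.6},
    "accept_refusal": {"autonomy": 0.5, "beneficence": -0.7, "non_maleficence": -0.6},
}

def moral_score(action):
    # Output for an action: belief strengths weighted by duty importance.
    return sum(DUTY_WEIGHTS[duty] * strength
               for duty, strength in BELIEFS[action].items())

for action in BELIEFS:
    print(f"{action}: {moral_score(action):+.2f}")
print("chosen:", max(BELIEFS, key=moral_score))   # -> try_again
```

With these made-up numbers, "try_again" scores highest, mirroring the expert conclusion reported above; in the actual system the weights and belief strengths would have to be grounded in ethical theory and domain expertise.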


Matthijs Pontier (1, 2)
Johan F. Hoorn (1)
1 VU University, Amsterdam
2 http://camera-vu.nl/matthijs/
[email protected]

[Figure: Moral reasoner. Moral goals (Autonomy, Beneficence, Non-maleficence) are linked via belief strengths to actions (Action1, Action2), producing the output F.]