Moral Coppélia: Affective moral reasoning with twofold autonomy and a touch of personality
Matthijs Pontier
Overview of this presentation
• SELEMCA
• Moral Reasoning
• Silicon Coppelia: Model of Emotional Intelligence
• Moral Reasoning + Silicon Coppelia = Moral Coppelia
• Predicting Crime with Moral Coppelia
• Conclusion
• Future Work
London, 03-04-2014, MEMCA-14 Symposium at AISB50
SELEMCA
• Develop ‘Caredroids’: robots or computer agents that assist patients and care-deliverers
• Focus on patients who stay in long-term care facilities
Possible functionalities
• Care-broker: Find care that matches the patient's needs
• Companion: Become friends with the patient to prevent loneliness and activate the patient
• Coach: Assist the patient in making healthy choices: Exercising, Eating healthy, Taking medicine, etc.
Applications: Care Agents
Applications: Care Robots
Background Machine Ethics
• Machines are becoming more autonomous
  Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards."
• Machines interact more with people
  We should ensure that machines do not harm us or threaten our autonomy
• Machine ethics is important to establish perceived trust in users
Domain: Medical Ethics
• Within SELEMCA, we develop caredroids
• Patients are in a vulnerable position, so the moral behavior of the robot is extremely important
• We focus on medical ethics
• Conflicts between:
1. Beneficence
2. Non-maleficence
3. Autonomy
4. Justice
Moral reasoning system
We developed a rational moral reasoning system that is capable of balancing conflicting moral goals.
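As a minimal illustration of how such balancing can work, the sketch below scores each candidate action by a weighted sum over the four principles of medical ethics and picks the best one. The action names, ratings, and weights are hypothetical; the actual system's values were tuned to match the decisions of medical-ethical experts.

```python
# A minimal sketch of balancing conflicting moral goals.
# All action ratings and principle weights are hypothetical
# illustrations, not the values used in the actual SELEMCA system.

# Effect of each candidate action on the four principles of
# medical ethics, rated on [-1, 1].
ACTIONS = {
    "accept_unhealthy_choice": {
        "beneficence": -0.4, "non_maleficence": -0.2,
        "autonomy": 0.8, "justice": 0.0,
    },
    "persuade_to_reconsider": {
        "beneficence": 0.6, "non_maleficence": 0.3,
        "autonomy": -0.3, "justice": 0.0,
    },
}

# Relative importance of each principle (hypothetical).
WEIGHTS = {"beneficence": 1.0, "non_maleficence": 1.0,
           "autonomy": 1.2, "justice": 0.8}

def moral_value(action: str) -> float:
    """Weighted sum of the action's effect on each moral goal."""
    return sum(WEIGHTS[p] * v for p, v in ACTIONS[action].items())

def choose_action() -> str:
    """Select the action with the highest overall moral value."""
    return max(ACTIONS, key=moral_value)
```

With these illustrative numbers, persuading outranks accepting because the gain in beneficence outweighs the loss of autonomy.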
Positive vs Negative Autonomy
• Negative autonomy = self-determination: freedom from interference by others
• Autonomy is more than self-determination:
  • Being able to make a meaningful choice
  • Acting in line with well-considered preferences
• Positive autonomy = freedom to make a meaningful choice
Typical moral dilemmas Caredroids will encounter
• Positive vs negative autonomy:
  • Accept an unhealthy choice vs persuade the patient to reconsider
  • Binding to a previous agreement vs giving up
→ Expand the moral principle of Autonomy
Model autonomy
New Moral Reasoning System
Results
• The new moral reasoning system matches the decisions of the previous moral reasoning system
• Simulation of 2008-2012 Dutch law cases:
  • Case 1: Assertive outreach to prevent judicial coercion
    • Patient in decline; aggression → assertive outreach
  • Case 2: Inform care deliverers, not the parents of an adult patient
    • Patient in an alarming situation; no good contact with parents
  • Case 3: Negative autonomy constrained to enhance positive autonomy
    • Self-binding declaration of an addict due to relapses in alcohol use
    • Conditions were met → judicial coercion
Conclusions Autonomy Model
• We created a moral reasoning system that includes a twofold approach to autonomy
• The system matches the decisions of medical-ethical experts
• The system matches the decisions in law cases
• By using theories of (medical) ethics, we can build robots that stimulate autonomy
Limitations of rational moral reasoning
• Moral reasoning alone results in very cold decision-making, purely in terms of rights and duties
• Wallach, Franklin & Allen (2010): "Ethical agents require emotional intelligence as well as other 'supra-rational' faculties, such as a sense of self and a 'Theory of Mind'"
• Tronto (1993): “Care is only thought of as good care when it is personalized”
Problem: Not Able to Simulate Trolley Dilemma vs Footbridge Dilemma
• Greene et al. (2001) find that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment.
• Their study was inspired by the difference between two variants of an ethical dilemma:
Trolley dilemma (moral impersonal)
Footbridge dilemma (moral personal)
Solution: Add Emotional Processing
• Previously, we developed Silicon Coppelia, a model of emotional intelligence
• It can be projected onto others for a Theory of Mind
• It learns from experience → personalization
→ Connect moral reasoning to Silicon Coppelia:
• More human-like moral reasoning
• Personalized moral decisions and communication about moral reasoning
Silicon Coppelia
Silicon Coppelia
• We developed Silicon Coppelia with the goal of creating emotionally human-like robots
• Simulation experiments: the system behaves consistently with theory and intuition
• Compare the performance of the model with the performance of a real human in a speed-dating experiment
Turing Test
• The Turing Test was originally text-based
• We enriched the test with affect-laden communication:
  • Facial expressions showing emotions
  • Capable of vocal speech
• Afterwards, a questionnaire: How do you think Tom perceived you?
  • The measure was made continuous and more elaborate than a simple yes/no
• Analysis: Bayesian structural equation modeling
Speed-dating Experiment
Results
• Participants did not detect differences on single variables
• Participants did not recognize significant differences in the cognitive-affective structure
• A model in which the conditions (1: human, 2: robot) were assumed equal explained the data better than a model in which the conditions were assumed different
Conclusions Speed-Date
• We created a simulation of affect so natural that young women could not discern dating a robot from dating a man
• Important for:
  • Understanding human affective communication
  • Developing communication technologies
  • Developing emotionally human-like robots
Silicon Coppelia + Moral Reasoning:
Decisions based on:
1. Rational influences
   • Does the action help me to reach my goals?
2. Affective influences
   • Does the action lead to desired emotions?
   • Does the action reflect the Involvement I feel towards the user?
   • Does the action reflect the Distance I feel towards the user?
3. Moral reasoning
   • Is this action morally good?
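These three kinds of influence can be sketched as one blended score. The field names, weights, and averaging formula below are illustrative assumptions, not Moral Coppelia's actual equations.

```python
# Sketch of blending rational, affective, and moral influences into one
# expected-satisfaction score. Weights and the averaging formula are
# hypothetical; Moral Coppelia's real equations differ.
from dataclasses import dataclass

@dataclass
class ActionAppraisal:
    expected_utility: float  # rational: does the action reach my goals?
    expected_emotion: float  # affective: does it lead to desired emotions?
    involvement: float       # affective: Involvement felt towards the user
    distance: float          # affective: Distance felt towards the user
    morality: float          # moral: is the action morally good?

def expected_satisfaction(a: ActionAppraisal,
                          w_rat: float = 0.4,
                          w_aff: float = 0.3,
                          w_mor: float = 0.3) -> float:
    """Weighted blend of the three kinds of influence on a decision."""
    affective = (a.expected_emotion + a.involvement - a.distance) / 3.0
    return w_rat * a.expected_utility + w_aff * affective + w_mor * a.morality
```

The agent would then select the action with the highest expected satisfaction.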
Results Trolley & Footbridge
                  Kill 1 to Save 5   Do Nothing
Moral system
  Trolley                X
  Footbridge             X
Moral Coppelia
  Trolley                X
  Footbridge                              X
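A toy score function shows the mechanism behind this pattern: a purely utilitarian system acts in both dilemmas, while adding an emotional aversion to hands-on personal harm flips only the footbridge decision. The aversion value and structure are illustrative, not Moral Coppelia's actual model.

```python
# Toy illustration of why emotional processing flips the footbridge case
# but not the trolley case. The aversion value is hypothetical.
DILEMMAS = {
    "trolley":    {"saved": 5, "lost": 1, "personal_harm": False},
    "footbridge": {"saved": 5, "lost": 1, "personal_harm": True},
}

def decide(d: dict, emotional: bool = False, aversion: float = 6.0) -> str:
    """Return 'act' (kill 1 to save 5) or 'do nothing'."""
    score = d["saved"] - d["lost"]   # utilitarian balance
    if emotional and d["personal_harm"]:
        score -= aversion            # strong negative affect for pushing
    return "act" if score > 0 else "do nothing"
```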
Background Criminology Study
• Substantial evidence that emotions are fundamental in criminal decision making
• But emotions rarely appear in criminal choice models
→ Study the relation between ratio, emotions, and morality
→ Apply Moral Coppelia to criminology data
→ Predict the criminal decisions of participants
Matching data to model
Match:
• Honesty/Humility to Weight_morality (wmor)
• Perceived Risk to Expected Utility
• Negative State Affect to EESA

Parameter Tuning:
1. Find optimal fits for the initial sample
2. Predict decisions for the holdout sample
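The two tuning steps can be sketched as a grid search: fit a blending weight on the initial sample, then score it on the holdout sample with R². The single rational-versus-emotional weight and the grid search are simplifying assumptions; the study tuned several parameters (wmor, partrat, partemo).

```python
# Sketch of the initial/holdout tuning procedure with one tunable weight
# (a simplification: the study tuned several parameters).

def r_squared(observed, predicted):
    """Proportion of variance in `observed` explained by `predicted`."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def predict(weight, rational, emotional):
    """Blend rational and emotional drives with one weight (cf. partrat)."""
    return [weight * r + (1.0 - weight) * e
            for r, e in zip(rational, emotional)]

def fit_weight(observed, rational, emotional, steps=100):
    """Step 1: grid-search the weight maximising R^2 on the initial sample."""
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates,
               key=lambda w: r_squared(observed,
                                       predict(w, rational, emotional)))
```

Step 2 then calls `r_squared` on the holdout sample using the weight fitted in step 1.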
Predicting Criminal Choices
              Ratio    Emo     R+E    Moral    M+R     M+E    M+R+E
morcc           0       0       0     0.68    0.42    0.453   0.435
wmor            0       0       0     1.00    0.96    0.97    0.87
partrat         1       0      0.34    0       1       0      0.64
partemo         0       1      0.66    0       0       1      0.36
R² initial    0.7553  0.8792  0.9222  0.9336  0.9871  0.9798  0.9881
R² holdout    0.7192  0.9060  0.9323  0.9281  0.9803  0.9778  0.9821
Conclusions
• We created an affective moral reasoning system
• The system matches the decisions of medical-ethical experts
• The system matches the decisions in law cases
• By using theories of (medical) ethics, we can build robots that stimulate autonomy
• The system can simulate the trolley and footbridge dilemmas
• The system can predict human criminal choices
Discussion
• The introduction of affect into rational ethics is important when robots communicate with humans
• The combination of ratio, affect, and morals is useful for applications that simulate human decision making, for example when agent systems or robots provide healthcare support, or in entertainment settings
Future Work: Apply in politics
• Personal freedom
• Privacy
• Human rights
• Transparency
• Citizen participation
• Evidence-based policy
• Science & Education
• Freedom of information
• Open access / Open data / Open source
• Elections European Parliament: 22.05.2014
Thank you!
Matthijs Pontier
http://camera-vu.nl/matthijs
http://www.linkedin.com/in/matthijspontier • @Matthijs85
http://www.piratenpartij.nl/ • @Piratenpartij