1
Computational Models for Argumentation in MAS
Leila Amgoud, IRIT – CNRS, France
[email protected]
2
Outline
Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions
3
The notion of agent (Wooldridge 2000)
An agent is a computer system that is capable of autonomous (i.e. independent) action on behalf of its user or owner (figuring out what needs to be done to satisfy design objectives, rather than constantly being told)
Rationality: agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved — at least insofar as its beliefs permit
4
The notion of agent
An agent needs the ability to make internal reasoning:
Reasoning about beliefs, desires, … Handling inconsistencies Making decisions Generating, revising, and selecting goals ...
5
Multi-agent systems (Wooldridge 2000)
A multi-agent system is one that consists of a number of agents, which interact with one another
Generally, agents will be acting on behalf of users with different goals and motivations
To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other
6
Multi-agent systems
Agents need to:
exchange information and explanations
resolve conflicts of opinion
resolve conflicts of interest
make joint decisions
⇒ they need to engage in dialogues
7
Dialogue types (Walton & Krabbe 1995)
8
The role of argumentation
Argumentation plays a key role for achieving the goals of the above dialogue types
Argument = Reason for some conclusion (belief, action, goal, etc.)
Argumentation = Reasoning about arguments to decide on a conclusion
Dialectical argumentation = Multi-party argumentation through dialogue
9
The role of argumentation
Argumentation plays a key role for reaching agreements:
Additional information can be exchanged
The opinion of the agent is explicitly explained (e.g. arguments in favor of opinions or offers, arguments in favor of a rejection or an acceptance)
Agents can modify/revise their beliefs / preferences / goals
To influence the behavior of an agent (threats, rewards)
10
A persuasion dialogue
P : The newspapers have no right to publish information I.
C : Why?
P : Because it is about X's private life and X does not agree (P1)
C : The information I is not private because X is a minister and all information concerning ministers is public (C1)
P : But X is not a minister since he resigned last month (P2)
Attack graph: P2 attacks C1, which attacks P1
11
A negotiation dialogue
Buyer: Can’t you give me this 806 a bit cheaper?
Seller: Sorry that’s the best I can do. Why don’t you go for a Polo instead?
Buyer: I have a big family and I need a big car (B1)
Seller: Modern Polos are becoming very spacious and would easily fit a big family. (S1)
Buyer: I didn’t know that, let’s also look at Polo then.
12
Why study argumentation in agent technology?
For internal reasoning of single agents: Reasoning about beliefs, goals, ... Making decisions Generating, revising, and selecting goals
For interaction between multiple agents: Exchanging information and explanations Resolving conflicts of opinions Resolving conflicts of interests Making joint decisions
13
Outline
Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions
14
Defeasible reasoning
Reasoning is generally defeasible: assumptions, exceptions, uncertainty, ...
AI formalises such reasoning with non-monotonic logics (default logic, etc.)
New premises can invalidate old conclusions
Argumentation logics formalise defeasible reasoning as construction and comparison of arguments
15
Argumentation process
Constructing arguments
Defining the interactions between arguments
Evaluating the strengths of arguments
Defining the status of arguments
Drawing conclusions using a consequence relation (inference problem)
Comparing decisions using a given principle (decision-making problem)
16
Main challenges
Q1: What are the different types of arguments ?
How do we construct arguments ?
Q2: How can an argument interact with another argument ?
Q3: How do we compute the strength of an argument ?
Q4: How do we determine the status of arguments ?
Q5: How do we conclude ?
How are decisions compared on the basis of their arguments ?
Q6: What are the properties that an argumentation system should satisfy ?
17
Q1: Building arguments
Types of arguments: (Kraus et al. 98, Amgoud & Prade 05)
Explanations (involve only beliefs) Tweety flies because it is a bird
Threats (involve beliefs + goals)
"You should do α, otherwise I will do β"
"You should not do α, otherwise I will do β"
Rewards (involve beliefs + goals)
"If you do α, I will do β"
"If you don't do α, I will do β"
…
18
Q1: Building arguments
Forms of arguments:
An inference tree grounded in premises
A deduction sequence
A pair (Premises, Conclusion), leaving unspecified the particular proof that leads from the Premises to the Conclusion
20
Q1: Building arguments
A: ({p, p → b, b → f}, f)
B: ({p, p → ¬f}, ¬f)
p: penguin
b: bird
f: fly
Stratified knowledge base: level 1: {p, p → b}; level 2: {p → ¬f}; level 3: {b → f}
21
Q1: Building arguments
Example 2. (Decision problem)
K = a propositional knowledge base, G = a goals base, D = a set of decision options
An argument in favor of a decision d is a triple A = <S, g, d> s.t.:
1. d ∈ D
2. g ∈ G
3. S ⊆ K
4. S ∪ {d} is consistent
5. S ∪ {d} |- g
6. S is minimal (w.r.t. ⊆) satisfying the above conditions
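The six conditions above can be sketched in code. As a simplifying assumption (not the slide's full propositional setting), knowledge is restricted to definite rules (body → head) so that entailment reduces to forward chaining; `is_argument` and the rule encoding are illustrative names, and S ⊆ K is taken for granted.

```python
# Sketch: checking the conditions for a practical argument <S, g, d>
# over definite rules (body -> head). "~x" denotes the negation of "x".
from itertools import combinations

def closure(rules, facts):
    """Forward-chain definite rules from a set of literal facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def is_argument(S, g, d, G, D):
    """d in D, g in G, S u {d} consistent, S u {d} derives g, S minimal."""
    if d not in D or g not in G:
        return False
    lits = closure(S, {d})
    if not consistent(lits) or g not in lits:
        return False
    # minimality: no proper subset of S still derives g
    return not any(g in closure(sub, {d})
                   for k in range(len(S))
                   for sub in combinations(S, k))

# umbrella example: taking the umbrella (u) keeps you dry (~w)
S = {(frozenset({"u"}), "~w")}
assert is_argument(S, "~w", "u", G={"~w", "~l"}, D={"u", "~u"})
```

The minimality check is brute-force over subsets, which is fine for the tiny bases used in the examples.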
22
Q1: Building arguments
r: rain, w: wet, c: cloud, u: umbrella, l: overloaded
K = {c, c → r, u → l, ¬u → ¬l, u → ¬w, r ∧ ¬u → w, ¬r → ¬w}
G = {¬w, ¬l}
D = {u, ¬u}
A = <{u → ¬w}, {¬w}, u>
B = <{¬u → ¬l}, {¬l}, ¬u>
23
Q2: Interactions between arguments
Three conflict relations:
Rebutting attacks: two arguments with contradictory conclusions
Assumption attacks: an argument attacks an assumption of another argument
Undercutting attacks: an argument undermines some intermediate step (inference rule) of another argument
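The first two conflict relations can be illustrated with a toy literal-based encoding; this is an assumption made for illustration (`Argument`, `rebuts`, and `undermines` are hypothetical names, and "~x" stands for the negation of "x"):

```python
# Sketch of conflict detection between arguments in a literal-based setting.
from typing import NamedTuple, FrozenSet

class Argument(NamedTuple):
    premises: FrozenSet[str]
    conclusion: str

def neg(lit: str) -> str:
    """Negate a literal: "x" <-> "~x"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def rebuts(a: Argument, b: Argument) -> bool:
    """Rebutting attack: the two conclusions are contradictory."""
    return a.conclusion == neg(b.conclusion)

def undermines(a: Argument, b: Argument) -> bool:
    """Assumption attack: a's conclusion contradicts one of b's premises."""
    return neg(a.conclusion) in b.premises

# Tweety example: the two arguments rebut each other
A = Argument(frozenset({"bird"}), "flies")
B = Argument(frozenset({"penguin"}), "~flies")
assert rebuts(A, B) and rebuts(B, A)
```

Undercutting attacks target an inference rule rather than a premise or conclusion, so they need an explicit representation of inference steps and are omitted from this sketch.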
24
Rebutting attacks
Tweety flies because it is a bird
versus
Tweety does not fly because it is a penguin
Conclusions: Tweety flies  vs.  ¬(Tweety flies)
25
Assumption attacks
Tweety flies because it is a bird, and it is not provable that Tweety is a penguin
versus
Tweety is a penguin
The first argument assumes ¬Penguin(Tweety); the second concludes Penguin(Tweety), attacking that assumption
26
Undercutting attack
An argument challenges the connection between the premisses and the conclusion
Tweety flies because all the birds I ’ve seen fly
The first argument infers d from premises a, b, c
I've seen Opus, it is a bird and it does not fly
This undercutter concludes ¬[a, b, c ⇒ d]: it denies the inference step from a, b, c to d
27
Q3: Strengths of arguments
Why do we need to compute the strengths of arguments ?
To compare arguments
To refine the status of arguments by removing some attacks
To define decision principles
28
Q3: Strengths of arguments
The strength of an argument depends on the quality of information used to build that argument
Examples: Weakest link principle (Benferhat & al. 95, Amgoud 96)
Last link principle (Prakken & Sartor 97)
Specificity principle (Simari & Loui 92)
...
Preference relation between data → Strength of an argument → Preference relation between arguments
29
Q3: Strengths of arguments
Example 1. (Weakest link principle)
A: ({p, p → b, b → f}, f)
B: ({p, p → ¬f}, ¬f)
Strength(A) = 3, Strength(B) = 2
Then B is preferred to (stronger than) A
Stratified knowledge base: level 1: {p, p → b}; level 2: {p → ¬f}; level 3: {b → f}
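The weakest-link computation can be sketched directly, assuming (as the stratified base suggests) that each formula carries a certainty level, 1 being the most certain, and that an argument's strength is the level of its least certain premise; the string encoding of formulas is illustrative.

```python
# Weakest-link strength: an argument is only as strong as its least
# certain premise, and a lower strength value means a stronger argument.
def strength(premises, level):
    """Level of the weakest (least certain) premise."""
    return max(level[p] for p in premises)

# Certainty levels of the stratified base (1 = most certain)
level = {"p": 1, "p->b": 1, "p->~f": 2, "b->f": 3}
A = {"p", "p->b", "b->f"}   # Tweety flies
B = {"p", "p->~f"}          # Tweety does not fly

assert strength(A, level) == 3
assert strength(B, level) == 2
# B is preferred: its weakest premise is more certain than A's
assert strength(B, level) < strength(A, level)
```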
30
Q3: Strengths of arguments
Example 2.
A = <{u → ¬w}, {¬w}, u>
B = <{¬u → ¬l}, {¬l}, ¬u>
Strength(A) = (1, 1), Strength(B) = (1, )
Different preference relations between such arguments are defined (Amgoud, Prade 05)
(Same knowledge base K and goals base G as in Example 2 above)
31
Q4: Status of arguments
Some attacks can be removed
Defeat = Attack + Preference relation between arguments
Attacking and not weaker ⇒ Defeat
Attacking and stronger ⇒ Strict Defeat
If A attacks B but A is weaker (A < B), A does not defeat B; if A attacks B and A is stronger (A > B), A strictly defeats B
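The two cases can be sketched as follows; the attack relation and the strict preference are assumed to be given as a set of pairs and a predicate (illustrative names):

```python
# Defeat = attack filtered by a preference relation:
# A defeats B iff A attacks B and B is not strictly preferred to A.
def defeats(a, b, attacks, prefers):
    """prefers(x, y) == True means x is strictly stronger than y."""
    return (a, b) in attacks and not prefers(b, a)

def strictly_defeats(a, b, attacks, prefers):
    """Attacking and strictly stronger."""
    return (a, b) in attacks and prefers(a, b)

attacks = {("A", "B"), ("B", "A")}
stronger = {("B", "A")}  # B is strictly preferred to A
prefers = lambda x, y: (x, y) in stronger

assert defeats("B", "A", attacks, prefers)          # attacking and not weaker
assert strictly_defeats("B", "A", attacks, prefers)  # attacking and stronger
assert not defeats("A", "B", attacks, prefers)       # A attacks but is weaker
```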
32
Q4: Status of arguments
Given <Args, Defeat>, what is the status of a given argument A ∈ Args ?
Three classes of arguments
Arguments with which a dispute can be won (justified)
Arguments with which a dispute can be lost (rejected)
Arguments that leave the dispute undecided
33
Q4: Status of arguments
Two ways for computing the status of arguments:
The declarative form usually requires fixed-point definitions, and establishes certain sets of arguments as acceptable Acceptability semantics
The procedural form amounts to defining a procedure for testing whether a given argument is a member of « a set of acceptable arguments » Proof theory
34
Acceptability semantics
Semantics = specifies conditions for labelling the argument graph
The labelling should: accept undefeated arguments capture the notion of reinstatement
A B C
A reinstates C
35
Acceptability semantics
Example of labelling: L: Args → {in, out, und}
An argument is in if all its defeaters are out
An argument is out if it has a defeater that is in
An argument is und otherwise
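The three labelling rules admit a direct fixpoint sketch; iterating them to a fixed point yields the unique grounded labelling (function and variable names are illustrative):

```python
# Iteratively label arguments "in" when all their defeaters are "out"
# and "out" when some defeater is "in"; what remains is "und".
def label(args, defeat):
    defeaters = {a: {b for (b, c) in defeat if c == a} for a in args}
    lab = {}
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in lab:
                continue
            if all(lab.get(b) == "out" for b in defeaters[a]):
                lab[a], changed = "in", True
            elif any(lab.get(b) == "in" for b in defeaters[a]):
                lab[a], changed = "out", True
    return {a: lab.get(a, "und") for a in args}

# Reinstatement chain A -> B -> C: A reinstates C
assert label({"A", "B", "C"}, {("A", "B"), ("B", "C")}) == \
       {"A": "in", "B": "out", "C": "in"}
# Mutual attack: both arguments stay undecided under this cautious labelling
assert label({"A", "B"}, {("A", "B"), ("B", "A")}) == {"A": "und", "B": "und"}
```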
36
Acceptability semantics
Example 1: A defeats B, B defeats C
Only one possible labelling: A is in, B is out, C is in
37
Acceptability semantics
Example 2: A and B defeat each other
Two possible labellings: {A in, B out} and {A out, B in}
38
Acceptability semantics
Two approaches:
A unique status approach: An argument is justified iff it is in; rejected iff it is out; undecided iff it is und
A multiple status approach: An argument is justified iff it is in in every labelling; rejected iff it is out in every labelling; undecided iff it is in in some labellings and out in others
39
Acceptability semantics
Unique status: Grounded semantics (Dung 95)
E1 = all undefeated arguments
E2 = E1 + all arguments reinstated by E1
…
It is non-empty only if there are undefeated arguments
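The E1, E2, … iteration above can be sketched directly: start with the undefeated arguments and repeatedly add every argument the current set reinstates, until a fixed point is reached (names are illustrative):

```python
# Grounded extension via the E1, E2, ... iteration.
def grounded_extension(args, defeat):
    defeaters = {a: {b for (b, c) in defeat if c == a} for a in args}
    def reinstated(a, E):
        # every defeater of a is itself defeated by some member of E
        return all(any((e, b) in defeat for e in E) for b in defeaters[a])
    E = {a for a in args if not defeaters[a]}   # E1: undefeated arguments
    while True:
        nxt = E | {a for a in args if reinstated(a, E)}
        if nxt == E:
            return E
        E = nxt

# Chain A -> B -> C -> D: A is undefeated, A reinstates C
assert grounded_extension({"A", "B", "C", "D"},
                          {("A", "B"), ("B", "C"), ("C", "D")}) == {"A", "C"}
```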
40
Acceptability semantics
Problem with grounded semantics: floating arguments
Example: A and B defeat each other, both defeat C, and C defeats D. The grounded extension is empty, yet we want D to be justified: whichever of A or B is accepted, C is defeated, so D is reinstated
41
Acceptability semantics
Multiple labellings:
With A and B defeating each other, both defeating C, and C defeating D: in both labellings ({A in, B out} and {A out, B in}), C is out and D is in
D is justified and C is rejected
42
Proof theories
Let <Args, Defeat> be an AS and S1, …, Sn its extensions under a given semantics.
Problem: Let a ∈ Args. Is a in one extension ? Is a in every extension ?
43
Proof theories
Let a ∈ Args. Problem: Is a in the grounded extension ?
Example: an argument graph over A0, …, A6, with A0 the argument to test
44
Proof theories (Amgoud & Cayrol 00)
A dialogue is a non-empty sequence of moves s.t.:
Movei = (Playeri, Argi) (i ≥ 0) where:
Playeri = P iff i is even, Playeri = C iff i is odd
Player0 = P and Arg0 = a
If Playeri = Playerj = P and i ≠ j, then Argi ≠ Argj
If Playeri = P (i > 1) then Argi strictly defeats Argi-1
If Playeri = C then Argi defeats Argi-1
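The dialogue conditions above can be checked mechanically. The sketch below assumes the defeat and strict-defeat relations are given as sets of pairs; `legal_dialogue` is an illustrative name:

```python
# Check the Amgoud & Cayrol dialogue conditions: P moves at even indices,
# C at odd ones, P never repeats an argument, P must strictly defeat the
# previous move, C need only defeat it.
def legal_dialogue(moves, a, defeats, strictly_defeats):
    """moves = [(player, arg), ...]; a is the argument under test."""
    if not moves or moves[0] != ("P", a):
        return False
    for i, (player, arg) in enumerate(moves):
        if player != ("P" if i % 2 == 0 else "C"):
            return False
        if i > 0:
            prev = moves[i - 1][1]
            rel = strictly_defeats if player == "P" else defeats
            if (arg, prev) not in rel:
                return False
    p_args = [arg for (pl, arg) in moves if pl == "P"]
    return len(p_args) == len(set(p_args))  # P repeats no argument

defeats = {("A1", "A0"), ("A2", "A1")}
strictly = {("A2", "A1")}
assert legal_dialogue([("P", "A0"), ("C", "A1"), ("P", "A2")],
                      "A0", defeats, strictly)
# C's move must defeat the previous move, so jumping to A2 is illegal here
assert not legal_dialogue([("P", "A0"), ("C", "A2")], "A0", defeats, strictly)
```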
45
Proof theories (Amgoud & Cayrol 00)
A dialogue tree is a finite tree where each branch is a dialogue
(Figure: the argument graph <Args, Defeat> over A0, …, A6 and its dialogue tree rooted at A0, with moves alternating between P and C)
46
Proof theories
A player wins a dialogue iff it makes the last move of that dialogue
(Figure: in the dialogue tree rooted at A0, two branches are won by P and one by C)
47
Proof theories
A candidate sub-tree is a sub-tree of the dialogue tree containing all the edges out of each even move (P) and exactly one edge out of each odd move (C)
A solution sub-tree is a candidate sub-tree whose branches are all won by P
P wins a dialogue tree iff the dialogue tree has a solution sub-tree
Complete construction: 'a' ∈ the grounded extension iff ∃ a dialogue tree whose root is 'a' and which is won by P
48
Proof theories
Two candidate sub-trees S1 and S2 of the dialogue tree rooted at A0
Each branch of S2 is won by P ⇒ S2 is a solution sub-tree ⇒ A0 is in the grounded extension
49
Q5: Consequence relations
K: a knowledge base built from a logical language L, x: a formula of L
<Args, Defeat>: an argumentation system, S1, …, Sn: the extensions under a given semantics
K |~ x iff ∃ an argument A for x s.t. A ∈ Si for every Si, i = 1, …, n
K |~ x iff ∀ Si, ∃ an argument A for x with A ∈ Si
K |~ x iff ∃ Si s.t. ∃ an argument A for x with A ∈ Si, and ∄ Sj s.t. ∃ an argument B for ¬x with B ∈ Sj
K |~ x iff ∃ Si s.t. ∃ an argument A for x with A ∈ Si
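The skeptical and credulous variants among these relations can be sketched over precomputed extensions. The encoding of arguments as `name:conclusion` strings is an illustrative assumption, not part of the formalism:

```python
# Acceptance over extensions: skeptical = every extension supports x,
# credulous = some extension supports x.
def args_for(ext, x):
    """Arguments in extension ext whose conclusion is x."""
    return {a for a in ext if a.endswith(":" + x)}

def skeptical(extensions, x):
    """x is inferred iff every extension contains an argument for x."""
    return all(args_for(S, x) for S in extensions)

def credulous(extensions, x):
    """x is inferred iff some extension contains an argument for x."""
    return any(args_for(S, x) for S in extensions)

# Tweety: one extension accepts "flies", the other "~flies"
exts = [frozenset({"A:flies"}), frozenset({"B:~flies"})]
assert credulous(exts, "flies")
assert not skeptical(exts, "flies")
```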
50
Q5: Making decisions
D = a set of decision options
Problem = to define a preordering on D
<Args, Defeat> = an argumentation system
Let d ∈ D. Its arguments are gathered as <P1, …, Pn, C1, …, Cm>: P1, …, Pn are arguments PRO d, and C1, …, Cm are arguments CON d
51
Q5: Making decisions
Let E = the set of acceptable arguments
ArgP(d) = the arguments in E which are PRO d
ArgC(d) = the arguments in E which are CON d
52
Q5: Making decisions
Decision principles:
3 categories of principles: (Amgoud, Prade 2004-2006)
Unipolar principles = only one kind of arguments (PRO or CON) is involved
Bipolar principles = both arguments PRO and CON are involved
Non-polar principles
53
Q5: Making decisions
Unipolar principles:
Let d, d' ∈ D.
Counting arguments PRO: d ≻ d' iff |ArgP(d)| > |ArgP(d')|
Counting arguments CON: d ≻ d' iff |ArgC(d)| < |ArgC(d')|
Promotion focus: d ≻ d' iff ∃ P ∈ ArgP(d) s.t. ∀ P' ∈ ArgP(d'), P is stronger than P'
Prevention focus: d ≻ d' iff ∃ C' ∈ ArgC(d') s.t. ∀ C ∈ ArgC(d), C' is stronger than C
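The two counting principles can be sketched directly (function names are illustrative; `arg_pro` and `arg_con` map each decision to its accepted PRO and CON arguments):

```python
# Counting-based unipolar principles: prefer the decision with more
# PRO arguments, or the one with fewer CON arguments.
def prefer_by_pro(d, d2, arg_pro):
    """d is preferred to d2 iff d has strictly more PRO arguments."""
    return len(arg_pro[d]) > len(arg_pro[d2])

def prefer_by_con(d, d2, arg_con):
    """d is preferred to d2 iff d has strictly fewer CON arguments."""
    return len(arg_con[d]) < len(arg_con[d2])

# Umbrella example: one argument PRO u, one argument CON u
arg_pro = {"u": {"A"}, "~u": set()}
arg_con = {"u": {"B"}, "~u": set()}

assert prefer_by_pro("u", "~u", arg_pro)   # u has more arguments PRO
assert prefer_by_con("~u", "u", arg_con)   # ~u has fewer arguments CON
```

The promotion and prevention principles replace counting with the strength comparison of the previous slides, but follow the same shape.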
54
Q5: Making decisions
Bipolar principles:
Let d, d' ∈ D.
d ≻ d' iff ∃ P ∈ ArgP(d) s.t. ∀ P' ∈ ArgP(d'), P is stronger than P', and ∃ C' ∈ ArgC(d') s.t. ∀ C ∈ ArgC(d), C' is stronger than C
55
Q5: Making decisions
Non-polar principles:
Let d, d' ∈ D
The arguments <P1, …, Pn, C1, …, Cm> of d are aggregated into a single value δ, and those <P'1, …, P'k, C'1, …, C'l> of d' into δ'
d ≻ d' iff δ is stronger than δ'
56
Q6: Rationality postulates
Idea: What are the properties/rationality postulates that any AS should satisfy ? (Amgoud, Caminada 05)
Consistency = AS should ensure safe conclusions: the set {x | K |~ x} should be consistent, and the set of conclusions of each extension should be consistent
Closedness = AS should not forget safe conclusions: the set {x | K |~ x} should be closed, and the set of conclusions of each extension should be closed
57
Outline
Introduction to MAS Fundamentals of argumentation Argumentation in MAS Conclusions
58
Dialogue systems
Agents exchange moves (e.g. Claim p, Argue(S, p), …), and each agent i maintains a commitment store CSi
An argumentation system evaluates the outcome of the dialogue
59
Components of a dialogue system
Communication language + Domain language
Protocol = the set of rules for generating coherent dialogues
Agent Strategies = the set of tactics used by the agents to choose a move to play
Outcome One of a set of possible deals, or Conflict
Protocol + Strategies Outcome
60
Communication language
A syntax = a set of locutions, utterances or speech acts (Propose, Argue, Accept, Reject, etc.)
A semantics = a unique meaning for each utterance Mentalistic approaches Social approaches Protocol-based approaches
61
Dialogue Protocol
Protocol is public and independent from the mental states of the agents
Main parameters: the set of allowed moves (e.g. Claim, Argue, ...), the possible replies to each move, the number of moves per turn, the turn-taking, the notion of backtracking, the computation of the outcome
These parameters identify more or less rich dialogues
62
Dialogue Protocol
Computing the outcome: two approaches
The protocol is equipped with an argumentation system that evaluates the content of CS1 ∪ … ∪ CSn:
<Args(CS1 ∪ … ∪ CSn), Defeat>, or the global system <Args, Defeat>
The rules of the proof theory are encoded in the protocol
63
Dialogue Protocol
For persuasion dialogues, the two approaches return the same result if:
the other parameters are fixed in the same way
the acceptability semantics used is the same
64
Dialogue Strategies
A BDI agent, given the protocol and the commitment stores CS1, …, CSn, receives the set of allowed replies; an argumentation-based decision model then selects the next move to play
Move = Locution + Content (a locution together with an argument type, an argument, or an offer)
65
Dialogue Strategies
Different arguments are exchanged:
about beliefs
about goals (e.g. "I have a big family and I need a big car")
referring to plans (instrumental arguments) (e.g. "Modern Polos are becoming very spacious and would easily fit a big family")
Which argument to present, and when?
Need for a formal model of practical reasoning
66
Dialogue Strategies
Work on dialogue strategies is still at an early stage
Thus, it is not yet possible to characterize the outcome of the dialogue, i.e. to say when the outcome is optimal, etc.
67
Open issues
How are goals generated ? How / when are they revised ?
Do we always privilege new goals ? The answer is NO:
Threats ---> the goal can be adopted
Rewards ---> the goal can be ignored
AGM-style postulates for revising goals ?
68
Thank you