Argumentation in Artificial Intelligence
Transcript of Argumentation in Artificial Intelligence
Argumentation in Artificial Intelligence: 20 Years After Dung's Work
Federico Cerutti†
xxvi • vii • mmxv
† University of Aberdeen
P. Baroni, U. Brescia
T. J. M. Bench-Capon, U. Liverpool
C. Cayrol, IRIT
P. E. Dunne, U. Liverpool
M. Giacomin, U. Brescia
A. Hunter, UCL
H. Li, U. Aberdeen
S. Modgil, KCL
T. J. Norman, U. Aberdeen
N. Oren, U. Aberdeen
C. Reed, U. Dundee
G. R. Simari, U. Nacional del Sur
A. Toniolo, U. Aberdeen
M. Vallati, U. Huddersfield
S. Woltran, TU Wien
J. Leite, New U. Lisbon
S. Parsons, KCL
M. Thimm, U. Koblenz
This tutorial was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence, under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
The tutor acknowledges the contribution of the Santander Universities Network in supporting his travel.
outline
∙ Introduction: Why bother?
∙ Dung's AF: Syntax, semantics, current state of research
∙ Argumentation Schemes: Arguments in human experience
∙ A Semantic-Web view of Argumentation: AIF, OVA+, and other tools
∙ Frameworks: Abstract, instantiated, probabilistic frameworks (kite-level view)
∙ CISpaces: "One Ring to bring them all and in the darkness bind them"
∙ Algorithms and Implementations: …and how to choose among them
∙ The frontier
what is missing
A lot:
∙ Dialogues
∙ Argumentation and trust
∙ Argumentation in multi-agent systems
∙ Several approaches to represent arguments
∙ Several extensions to Dung's framework
∙ Several frontier approaches
∙ …
..why bother?
There is no milk in the shop and the milk you have is sour.
Beer: 1, Milk: 0
There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick.
Beer: 0, Milk: 0, Coffee?: 1
There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not drink alcohol.
Beer: 0, Milk: 1
There is no milk in the shop and the milk you have is sour.
There is a coffee machine and fresh coffee in the cupboard. Beer makes you sick.
There is fresh milk in your bag because you went to the shop earlier. The Principal is visiting later today, so you had better not drink alcohol.

| Beer | Milk | Coffee? |
| --- | --- | --- |
| 1 | 0 |  |
| 0 | 0 | 1 |
| 0 | 1 |  |
You should drink milk
You should drink beer
There is no milk in the shop and the milk you have is sour.
There is a coffee machine and fresh coffee in the cupboard.
Beer makes you sick
You should drink coffee
There is fresh milk in your bag because you went to the shop earlier.
The Principal is visiting later today, so you had better not drink alcohol
..dung’s argumentation framework
[Dun95]
Definition 1
A Dung argumentation framework AF is a pair ⟨A, →⟩ where A is a set of arguments, and → is a binary relation on A, i.e. → ⊆ A × A.
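Definition 1 is small enough to encode directly. A minimal Python sketch (the variable names `A` and `attacks` are mine, not from the slides):

```python
# An AF <A, -> is just a set of arguments plus a binary relation on it.
A = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c

def is_af(arguments, att):
    """Check the side condition of Definition 1: att ⊆ A × A."""
    return all(x in arguments and y in arguments for x, y in att)

print(is_af(A, attacks))  # True
```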
A semantics is a way to identify sets of arguments (i.e. extensions) "surviving the conflict together".
(some) semantics properties
[BG07] [BCG11]
(some) semantics properties
∙ Conflict-freeness (Def. 2): an attacking and an attacked argument cannot stay together (∅ is c.f. by def.)
∙ Admissibility (Def. 5): the extension should be able to defend itself, "fight fire with fire" (∅ is adm. by def.)
∙ Strong Admissibility (Def. 7): no self-defeating arguments (∅ is strongly adm. by def.)
∙ Reinstatement (Def. 8): if you defend some argument you should take it on board (∅ satisfies the principle only if there are no unattacked arguments)
∙ I-Maximality (Def. 9): no extension is a proper subset of another one
∙ Directionality (Def. 12): a (set of) argument(s) is affected only by its ancestors in the attack relation
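The first two properties translate directly into code. A sketch over an AF given as a set `arguments` and a set of `attacks` pairs (illustrative toy framework, not one from the tutorial):

```python
arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}  # a <-> b, b -> c

def conflict_free(S):
    """Def. 2: no member of S attacks another member of S."""
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, a):
    """S defends a iff every attacker of a is attacked by some s in S."""
    return all(any((s, b) in attacks for s in S)
               for b in arguments if (b, a) in attacks)

def admissible(S):
    """Def. 5: conflict-free and defends each of its members."""
    return conflict_free(S) and all(defends(S, a) for a in S)

print(admissible({"a", "c"}))  # True: a counter-attacks b, defending c
print(admissible({"c"}))       # False: nothing in {c} defends c against b
```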
You should drink milk
You should drink beer
There is no milk in the shop and the milk you have is sour.
There is a coffee machine and fresh coffee in the cupboard.
Beer makes you sick
You should drink coffee
There is fresh milk in your bag because you went to the shop earlier.
The Principal is visiting later today, so you had better not
[The eight statements above are shown as the nodes a–h of an attack graph.]
complete extension (def. 15)
Admissibility and reinstatement
Set of conflict-free arguments s.t. each defended argument is included
[Attack graph over arguments a–h]
{a, c, d, e, g}, {a, b, c, e, g}, {a, c, e, g}
grounded extension (def. 16)
Strong admissibility
Minimum complete extension
[Attack graph over arguments a–h]
{a, c, e, g}
preferred extension (def. 17)
Admissibility and maximality
Maximal complete extensions
[Attack graph over arguments a–h]
{a, c, d, e, g}, {a, b, c, e, g}
stable extension (def. 17)
"Horror vacui": the absence of odd-length cycles is a sufficient condition for the existence of stable extensions
Complete extensions attacking all the arguments outside
[Attack graph over arguments a–h]
{a, c, d, e, g}, {a, b, c, e, g}
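For small frameworks, all four semantics can be enumerated by brute force over subsets. A sketch (the three-argument AF below is my own toy example, not the eight-argument one in the figures):

```python
from itertools import combinations

ARGS = {"a", "b", "c"}
ATT = {("a", "b"), ("b", "a"), ("b", "c")}  # a <-> b, b -> c

def subsets(s):
    s = sorted(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def conflict_free(S):
    return not any((x, y) in ATT for x in S for y in S)

def defends(S, a):
    return all(any((s, b) in ATT for s in S)
               for b in ARGS if (b, a) in ATT)

# Complete (Def. 15): conflict-free fixed points of the "defends" operator.
CO = [S for S in subsets(ARGS)
      if conflict_free(S) and S == {a for a in ARGS if defends(S, a)}]
GR = min(CO, key=len)                               # grounded: minimal complete
PR = [S for S in CO if not any(S < T for T in CO)]  # preferred: maximal complete
ST = [S for S in CO                                 # stable: attack all outsiders
      if all(any((s, b) in ATT for s in S) for b in ARGS - S)]

print(CO)  # the complete extensions: empty set, {b}, and {a, c}
```

Here the grounded extension is empty (the mutual attack between a and b leaves everything undecided at the sceptical level), while {b} and {a, c} are both preferred and stable.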
complete labellings (def. 20)
Max. UNDEC ≡ Grounded
[Labelled attack graph over arguments a–h]
{a, c, e, g}
complete labellings (def. 20)
Max. IN ≡ Preferred
[Labelled attack graph over arguments a–h]
{a, c, d, e, g}
complete labellings (def. 20)
Max. IN ≡ Preferred
[Labelled attack graph over arguments a–h]
{a, b, c, e, g}
complete labellings (def. 20)
No UNDEC ≡ Stable
[Labelled attack graph over arguments a–h]
{a, c, d, e, g}
complete labellings (def. 20)
No UNDEC ≡ Stable
[Labelled attack graph over arguments a–h]
{a, b, c, e, g}
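The labelling view also suggests an algorithm: the grounded labelling can be computed by propagation, without enumerating subsets. A sketch (the chain AF is a toy example of my own):

```python
ARGS = {"a", "b", "c", "d"}
ATT = {("a", "b"), ("b", "c"), ("c", "d")}  # a -> b -> c -> d

def grounded_labelling(args, att):
    """Label IN every argument whose attackers are all OUT, label OUT
    every argument with an IN attacker, and repeat until nothing changes;
    whatever remains is UNDEC (cf. Max. UNDEC ≡ Grounded)."""
    IN, OUT = set(), set()
    changed = True
    while changed:
        changed = False
        for x in sorted(args - IN - OUT):
            attackers = {y for y in args if (y, x) in att}
            if attackers <= OUT:      # all attackers already defeated
                IN.add(x); changed = True
            elif attackers & IN:      # some attacker is accepted
                OUT.add(x); changed = True
    return IN, OUT, args - IN - OUT

IN, OUT, UNDEC = grounded_labelling(ARGS, ATT)
print(IN, OUT, UNDEC)  # a and c are IN, b and d are OUT, nothing is UNDEC
```

On a mutual attack (a <-> b with nothing else) the loop makes no progress, so both arguments stay UNDEC, matching the empty grounded extension.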
properties of semantics
|  | CO | GR | PR | ST |
| --- | --- | --- | --- | --- |
| D-conflict-freeness | Yes | Yes | Yes | Yes |
| D-admissibility | Yes | Yes | Yes | Yes |
| D-strong admissibility | No | Yes | No | No |
| D-reinstatement | Yes | Yes | Yes | Yes |
| D-I-maximality | No | Yes | Yes | Yes |
| D-directionality | Yes | Yes | Yes | No |
complexity
[DW09]
complexity
|  | σ = CO | σ = GR | σ = PR | σ = ST |
| --- | --- | --- | --- | --- |
| Exists_σ | trivial | trivial | trivial | NP-c |
| CA_σ | NP-c | polynomial | NP-c | NP-c |
| SA_σ | polynomial | polynomial | Π₂ᵖ-c | coNP-c |
| Ver_σ | polynomial | polynomial | coNP-c | polynomial |
| NE_σ | NP-c | polynomial | NP-c | NP-c |
an exercise
[Attack graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
an exercise
[Attack graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
ECO(∆) = { {a, c}, {a, c, f}, {a, c, m}, {a, c, f, m}, {a, c, f, l}, {a, c, g, m} }
an exercise
[Attack graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
EGR(∆) = { {a, c} }
an exercise
[Attack graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
EPR(∆) = { {a, c, f, m}, {a, c, f, l}, {a, c, g, m} }
an exercise
[Attack graph over arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p]
EST(∆) =
http://rull.dbai.tuwien.ac.at:8080/ASPARTIX/index.faces
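Solvers in the ASPARTIX family conventionally read an AF as a flat list of `arg`/`att` facts (the widely used "apx" encoding). A small illustrative input for a toy framework (not the exercise above, whose attack relation appears only in the figure):

```
arg(a).
arg(b).
arg(c).
att(a,b).
att(b,c).
```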
skepticisms and comparisons of sets of extensions
[BG09b]
skepticisms and comparisons of sets of extensions
[Diagram ordering the semantics GR, CO, PR, ST by skepticism]
⪯S⊕ relation, comparing extensions individually:
E1 ⪯E∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1: E1 ⊆ E2
E1 ⪯E∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2: E1 ⊆ E2
signatures
[Dun+14]
signatures
The signature of a semantics is the collection of all possible sets of extensions an AF can possess under that semantics (Def. 25). For S ⊆ 2^A:
∙ ArgsS = ∪S∈S S;
∙ PairsS = {⟨a,b⟩ | ∃S ∈ S s.t. {a,b} ⊆ S}.
S = { {a, d, e}, {b, c, e}, {a, b} }
ArgsS = {a, b, c, d, e}
PairsS = {⟨a,b⟩, ⟨a,d⟩, ⟨a,e⟩, ⟨b,c⟩, ⟨b,e⟩, ⟨c,e⟩, ⟨d,e⟩}
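ArgsS and PairsS are one-liners to compute. A sketch over the S from this slide, treating ⟨a,b⟩ as unordered, since the membership condition {a,b} ⊆ S is symmetric:

```python
S = [{"a", "d", "e"}, {"b", "c", "e"}, {"a", "b"}]

Args = set().union(*S)
Pairs = {frozenset({x, y}) for E in S for x in E for y in E if x != y}

print(sorted(Args))  # ['a', 'b', 'c', 'd', 'e']
print(len(Pairs))    # 7 unordered pairs, matching the slide
```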
signatures
∙ Incomparable (Def. 26): for A, B ∈ S, A ⊆ B iff A = B
"Maximality"
∙ Tight (Def. 27): if S ∪ {a} ∉ S then ∃b ∈ S s.t. ⟨a,b⟩ ∉ PairsS
if an argument does not occur in some extension there must be a reason for that (typically a conflict)
∙ Adm-Closed (Def. 28): if ⟨a,b⟩ ∈ PairsS for all a, b ∈ A ∪ B, then A ∪ B ∈ S
"Admissibility"
Stable iff incomparable and tight
Preferred iff non-empty, incomparable and adm-closed
signatures
S = { {a, d, e}, {b, c, e}, {a, b} }
incomparable and adm-closed (if ⟨a,b⟩ ∈ PairsS for all a, b ∈ A ∪ B, then A ∪ B ∈ S)
[Attack graph over arguments a–f realising S]
exercise
S = { {a, d, e}, {b, c, e}, {a, b, d} }
Does an AF ∆ having EPR(∆) = S exist? No.
PairsS = {⟨a,b⟩, ⟨a,d⟩, ⟨a,e⟩, ⟨b,c⟩, ⟨b,e⟩, ⟨c,e⟩, ⟨d,e⟩, ⟨b,d⟩}
b, d ∈ {a, d, e} ∪ {a, b, d} with ⟨b,d⟩ ∈ PairsS, but {a, d, e} ∪ {a, b, d} = {a, b, d, e} ∉ S, so S is not adm-closed.
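The adm-closedness check in this exercise mechanises directly; a sketch (function name is mine):

```python
S = [{"a", "d", "e"}, {"b", "c", "e"}, {"a", "b", "d"}]
Pairs = {frozenset({x, y}) for E in S for x in E for y in E if x != y}

def adm_closed(S, Pairs):
    """Def. 28: whenever every pair over A ∪ B occurs in Pairs,
    the union A ∪ B must itself belong to S."""
    for A in S:
        for B in S:
            U = A | B
            pairs_ok = all(frozenset({x, y}) in Pairs
                           for x in U for y in U if x != y)
            if pairs_ok and U not in S:
                return False   # witness: U is forced but missing
    return True

print(adm_closed(S, Pairs))  # False, so no AF realises S under preferred
```

The failing witness found here is exactly the one on the slide: {a, d, e} ∪ {a, b, d} = {a, b, d, e}.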
decomposability
[Bar+14]
decomposability
[Diagram: an AF partitioned into sub-frameworks AF1, AF2, AF3]
Is it possible to consider a (partial) argumentation framework as a black-box and focus only on the input/output interface?
decomposability
A semantics is:
∙ Fully decomposable (Def. 35): any combination of "local" labellings gives rise to a global labelling, and any global labelling arises from a set of "local" labellings
∙ Top-Down decomposable (Def. 36): combining "local" labellings you get all global labellings, possibly more
∙ Bottom-Up decomposable (Def. 37): combining "local" labellings you get only global labellings, possibly fewer

|  | CO | ST | GR | PR |
| --- | --- | --- | --- | --- |
| Full decomposability | Yes | Yes | No | No |
| Top-down decomposability | Yes | Yes | Yes | Yes |
| Bottom-up decomposability | Yes | Yes | No | No |
..argumentation schemes
what is an argument?
Argumentation is a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of propositions justifying or refuting the proposition expressed in the standpoint.
Some elements of dialogue are in the handout, but they will not be considered here.
[WRM08]
practical inference: an example of argumentation scheme
Premises:
∙ Goal Premise: Bringing about Sn is my goal.
∙ Means Premise: In order to bring about Sn, I need to bring about Si.
Conclusion:
∙ Therefore, I need to bring about Si.
Critical questions:
∙ Other-Means Q.: Are there alternative possible actions to bring about Si that could also lead to the goal?
∙ Best-Means Q.: Is Si the best (or most favourable) of the alternatives?
∙ Other-Goals Q.: Do I have goals other than Si whose achievement is preferable and that should have priority?
∙ Possibility Q.: Is it possible to bring about Si in the given circumstances?
∙ Side Effects Q.: Would bringing about Si have known bad consequences that ought to be taken into account?
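In software, a scheme like this is typically carried around as structured data: premise slots, a conclusion template, and the critical questions. A minimal sketch, with field names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class Scheme:
    name: str
    premises: dict                 # premise label -> statement template
    conclusion: str
    critical_questions: dict = field(default_factory=dict)

practical_inference = Scheme(
    name="Practical inference",
    premises={
        "Goal": "Bringing about Sn is my goal",
        "Means": "In order to bring about Sn, I need to bring about Si",
    },
    conclusion="Therefore, I need to bring about Si",
    critical_questions={
        "Other-Means": "Are there alternative actions that also lead to the goal?",
        "Best-Means": "Is Si the best of the alternatives?",
        "Other-Goals": "Do other goals have priority?",
        "Possibility": "Is Si possible in the circumstances?",
        "Side-Effects": "Would Si have known bad consequences?",
    },
)
```

Each unanswered critical question is a natural attack point on an instantiated scheme, which is how schemes plug back into the abstract frameworks discussed earlier.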
an example
∙ Goal: "Bringing about being rich is my goal" (I want to be rich)
∙ Means/Plan: "In order to bring about being rich I need to bring about having a job" (To be rich I need a job)
∙ Action: "Therefore I need to bring about having a job" (Therefore I have to search for a job)
an example
http://ova.arg-tech.org/
with
http://homepages.abdn.ac.uk/f.cerutti/pages/research/tutorialijcai2015/rich.html
![Page 75: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/75.jpg)
a semantic-web view of argumentation
![Page 76: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/76.jpg)
[Rah+11]
![Page 77: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/77.jpg)
AIF core ontology (diagram): a Graph (argument network) has-a Node and has-a Edge. A Node is-a Information Node (I-Node) or a Scheme Node (S-Node). An S-Node is-a Rule of inference application node (RA-Node), Conflict application node (CA-Node), Preference application node (PA-Node), or Derived concept application node (e.g. defeat). Each application node uses a corresponding scheme, contained in a Context: Rule of inference schemes (Logical inference scheme, Presumptive inference scheme, …), Conflict schemes (Logical conflict scheme, …), and Preference schemes (Logical preference scheme, Presumptive preference scheme, …).
![Page 79: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/79.jpg)
[Bex+13]
![Page 81: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/81.jpg)
http://www.arg-tech.org/AIFdb/argview/4879
http://toast.arg-tech.org/
![Page 82: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/82.jpg)
abstract argumentation frameworks
![Page 83: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/83.jpg)
Value Based AF
Extended AF AFRA
Bipolar AF
![Page 84: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/84.jpg)
value based argumentation framework
[BA09]
![Page 85: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/85.jpg)
value based argumentation framework
Diagram: arguments a1 (value LC), a2 (values LC, FC), and a3 (values LC, FH), with attacks among them.

- a1: Hal should not take insulin, thus allowing Carla to stay alive (value of Life for Carla, LC);
- a2: Hal should take insulin and compensate Carla, so that both of them stay alive (value of Life for Carla, and the Freedom of using money for Carla, FC);
- a3: Hal should take insulin and Carla should buy insulin, so that both of them stay alive (value of Life for Carla, and the Freedom of using money for Hal, FH).
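The value-based defeat relation can be sketched in a few lines of Python. Note that the attack relation and the audience's value preference below are assumptions made for illustration, not taken from [BA09]:

```python
# Sketch of value-based defeat in a VAF: an attack from a to b succeeds
# only if the value promoted by b is not preferred to the value promoted
# by a. The attack relation and the audience ordering are illustrative
# assumptions based on the Hal/Carla example above.
attacks = {("a2", "a1"), ("a3", "a1"), ("a2", "a3"), ("a3", "a2")}
value = {"a1": "LC", "a2": "FC", "a3": "FH"}  # value promoted by each argument

def defeats(a, b, audience_pref):
    """a defeats b iff a attacks b and b's value is not preferred to a's."""
    if (a, b) not in attacks:
        return False
    return (value[b], value[a]) not in audience_pref

# An audience that prefers Carla's freedom (FC) to Hal's (FH):
audience = {("FC", "FH")}
print(defeats("a2", "a3", audience))  # True
print(defeats("a3", "a2", audience))  # False
```

For a different audience (e.g. one preferring FH to FC) the defeats reverse, which is exactly the audience-dependence the framework is designed to capture.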
![Page 86: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/86.jpg)
![Page 87: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/87.jpg)
extended argumentation framework
[Mod09]
![Page 88: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/88.jpg)
extended argumentation framework
- a1: "Today will be dry in London since the BBC forecast sunshine";
- a2: "Today will be wet in London since CNN forecast rain";
- a3: "But the BBC are more trustworthy than CNN";
- a4: "However, statistically CNN are more accurate forecasters than the BBC";
- a5: "Basing a comparison on statistics is more rigorous and rational than basing a comparison on your instincts about their relative trustworthiness".
![Page 89: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/89.jpg)
![Page 90: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/90.jpg)
afra: argumentation framework with recursive attacks
[Bar+11]
![Page 91: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/91.jpg)
afra: argumentation framework with recursive attacks
- a1: There is a last-minute offer for Gstaad: therefore I should go to Gstaad;
- a2: There is a last-minute offer for Cuba: therefore I should go to Cuba;
- a3: I do like to ski;
- a4: The weather report says that in Gstaad there have been no snowfalls for a month: therefore it is not possible to ski in Gstaad;
- a5: It is anyway possible to ski in Gstaad, thanks to a good amount of artificial snow.
![Page 92: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/92.jpg)
![Page 93: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/93.jpg)
bipolar argumentation framework
[CL05]
![Page 94: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/94.jpg)
bipolar argumentation framework
Diagram: a3 attacks a2, a2 attacks a1, and a4 supports a1.

- a1: in favour of m, with premises {s, f, (s ∧ f) → m};
- a2: in favour of ¬s, with premises {w, w → ¬s};
- a3: in favour of ¬w, with premises {b, b → ¬w};
- a4: in favour of f, with premises {l, l → f};

where:

- m: Mary (who is small) is the killer;
- f: the killer is female;
- s: the killer is small;
- w: a witness says that the killer is tall;
- b: the witness is short-sighted;
- l: the killer has long hair and wears lipstick.
![Page 95: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/95.jpg)
structured argumentation frameworks
![Page 96: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/96.jpg)
DeLP ABA
ASPIC+
DeductiveArgumentation
Logic for ClinicalKnowledge
![Page 97: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/97.jpg)
delp: defeasible logic programming
[SL92] [GS14]
![Page 98: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/98.jpg)
delp: defeasible logic programming
A DeLP program is a pair ⟨Π, ∆⟩:

- Π: non-defeasible knowledge, i.e. facts (atomic information) and strict rules L0 ←− L1, . . . , Ln;
- ∆: defeasible knowledge, i.e. defeasible rules L0 −< L1, . . . , Ln.

Def. 40

Let H be a ground literal: ⟨A, H⟩ is an argument structure if:

1. there exists a defeasible derivation* for H from ⟨Π, A⟩;
2. there are no defeasible derivations from ⟨Π, A⟩ of contradictory literals;
3. there is no proper subset A′ ⊂ A such that A′ satisfies (1) and (2).

*A defeasible derivation for Q from ⟨Π, ∆⟩ is a sequence L1, L2, . . . , Ln = Q s.t. for each i: (i) Li is a fact; or (ii) there exists a rule Ri in ⟨Π, ∆⟩ with head Li and body B1, . . . , Bk, and every literal of the body is an element Lj of the sequence with j < i.
![Page 99: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/99.jpg)
delp: defeasible logic programming
Def. 41
⟨B, S⟩ is a counter-argument for ⟨A,H⟩ at literal P, if there exists a sub-argument ⟨C,P⟩ of⟨A,H⟩ such that P and S disagree, that is, there exist two contradictory literals that have astrict derivation from Π ∪ {S,P}. The literal P is referred as the counter-argument pointand ⟨C,P⟩ as the disagreement sub-argument.
Def. 42
Let ⟨B, S⟩ be a counter-argument for ⟨A,H⟩ at point P, and ⟨C,P⟩ the disagreementsub-argument.
If ⟨B, S⟩ ≻* ⟨C,P⟩, then ⟨B, S⟩ is a proper defeater for ⟨A,H⟩.
If ⟨B, S⟩ ⊁ ⟨C,P⟩ and ⟨C,P⟩ ⊁ ⟨B, S⟩, then ⟨B, S⟩ is a blocking defeater for ⟨A,H⟩.
⟨B, S⟩ is a defeater for ⟨A,H⟩ if ⟨B, S⟩ is either a proper or blocking defeater for ⟨A,H⟩.
*≻ is an argument comparison criterion.
![Page 100: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/100.jpg)
delp: defeasible logic programming

Π1:
- cloudy
- dry_season
- waves
- vacation
- ¬working ←− vacation

∆1:
- surf −< nice, spare_time
- nice −< waves
- spare_time −< ¬busy
- ¬busy −< ¬working
- ¬nice −< rain
- rain −< cloudy
- ¬rain −< dry_season

A0 = { surf −< nice, spare_time; nice −< waves; spare_time −< ¬busy; ¬busy −< ¬working }
A1 = { ¬nice −< rain; rain −< cloudy }
A2 = { nice −< waves }
A3 = { rain −< cloudy }
A4 = { ¬rain −< dry_season }
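The defeasible derivation of the footnote of Def. 40 amounts to forward chaining over Π ∪ A. A minimal Python sketch over the Π1/∆1 program above (it checks derivability only, ignoring DeLP's dialectical analysis of counter-arguments and defeat):

```python
# Forward-chaining check of defeasible derivability (footnote of Def. 40)
# over the Pi_1 / Delta_1 program above; "~" marks strong negation.
facts = {"cloudy", "dry_season", "waves", "vacation"}
strict_rules = [("~working", ["vacation"])]
A0 = [  # the defeasible rules of argument structure A0
    ("surf", ["nice", "spare_time"]),
    ("nice", ["waves"]),
    ("spare_time", ["~busy"]),
    ("~busy", ["~working"]),
]

def derives(goal, rules):
    """True iff goal has a defeasible derivation from the facts via rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(l in derived for l in body):
                derived.add(head)
                changed = True
    return goal in derived

print(derives("surf", strict_rules + A0))  # True: surf is derivable with A0
print(derives("surf", strict_rules))       # False: the defeasible rules are needed
```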
![Page 101: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/101.jpg)
![Page 102: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/102.jpg)
![Page 103: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/103.jpg)
assumption based argumentation framework
[Bon+97] [Ton14]
![Page 104: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/104.jpg)
assumption based argumentation framework
An ABA framework is a tuple ⟨L, R, A, ¯⟩ where:

- L is the language;
- R is the set of rules;
- A ⊆ L is the set of assumptions;
- ¯ : A → L is the contrariness mapping.

Def. 45

An argument for the claim σ ∈ L supported by A ⊆ A (written A ⊢ σ) is a deduction for σ supported by A (and some R ⊆ R).*

Def. 46

An argument A1 ⊢ σ1 attacks an argument A2 ⊢ σ2 iff σ1 is the contrary of one of the assumptions in A2.

*A (finite) tree with nodes labelled by sentences in L or by τ ∉ L, the root labelled by σ, leaves either τ or sentences in A, and each non-leaf σ′ having as children the elements of the body of some rule in R with head σ′.
![Page 105: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/105.jpg)
assumption based argumentation framework
R = { innocent(X) ←− notGuilty(X);
killer(oj) ←− DNAshows(oj), DNAshows(X) ⊃ killer(X);
DNAshows(X) ⊃ killer(X) ←− DNAfromReliableEvidence(X);
evidenceUnreliable(X) ←− collected(X, Y), racist(Y);
DNAshows(oj) ←−;
collected(oj, mary) ←−;
racist(mary) ←− }

A = { notGuilty(oj); DNAfromReliableEvidence(oj) }

Contraries: the contrary of notGuilty(oj) is killer(oj); the contrary of DNAfromReliableEvidence(oj) is evidenceUnreliable(oj).
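Since Def. 46 is purely syntactic, the attack relation of this example can be checked directly. The argument pairs below are hand-built for illustration rather than generated by an ABA engine (no deduction trees are constructed):

```python
# Checking Def. 46 attacks on hand-built arguments (support, claim) from
# the O.J. example; a sketch, not a full ABA engine.
contrary = {
    "notGuilty(oj)": "killer(oj)",
    "DNAfromReliableEvidence(oj)": "evidenceUnreliable(oj)",
}

def attacks(arg1, arg2):
    """arg1 attacks arg2 iff arg1's claim is the contrary of an
    assumption in arg2's support."""
    (_, claim1), (support2, _) = arg1, arg2
    return any(contrary[a] == claim1 for a in support2)

a = ({"notGuilty(oj)"}, "innocent(oj)")
b = ({"DNAfromReliableEvidence(oj)"}, "killer(oj)")
c = (set(), "evidenceUnreliable(oj)")
print(attacks(b, a))  # True: killer(oj) is the contrary of notGuilty(oj)
print(attacks(c, b))  # True
print(attacks(a, b))  # False
```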
![Page 106: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/106.jpg)
assumption based argumentation framework
![Page 107: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/107.jpg)
![Page 108: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/108.jpg)
aspic+
[Pra10] [MP13] [MP14]
![Page 109: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/109.jpg)
aspic+
Def. 47
An argumentation system is a tuple AS = ⟨L, ¯, R, ν⟩ where:

- ¯ : L → 2^L is a contrariness function s.t. if φ ∈ ψ̄ and:
  - ψ ∉ φ̄, then φ is a contrary of ψ;
  - ψ ∈ φ̄, then φ is a contradictory of ψ (φ = –ψ);
- R = Rs ∪ Rd is a set of strict (Rs) and defeasible (Rd) inference rules s.t. Rd ∩ Rs = ∅;
- ν : Rd → L is a partial function.*

P ⊆ L is consistent iff there are no φ, ψ ∈ P s.t. φ ∈ ψ̄; otherwise it is inconsistent.

A knowledge base in an AS is K = Kn ∪ Kp ⊆ L, where {Kn, Kp} is a partition of K: Kn contains axioms that cannot be attacked, and Kp contains ordinary premises that can be attacked.

An argumentation theory is a pair AT = ⟨AS, K⟩.

*Informally, ν(r) is a wff in L which says that the defeasible rule r is applicable.
![Page 110: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/110.jpg)
aspic+
Def. 48
An argument a on the basis of an AT = ⟨AS, K⟩, AS = ⟨L, R, ν⟩, is:

1. φ, if φ ∈ K, with: Prem(a) = {φ}; Conc(a) = φ; Sub(a) = {φ}; Rules(a) = DefRules(a) = ∅; TopRule(a) = undefined.
2. a1, . . . , an −→/=⇒ ψ, if a1, . . . , an (n ≥ 0) are arguments such that there exists a strict/defeasible rule r = Conc(a1), . . . , Conc(an) −→/=⇒ ψ in Rs/Rd, with: Prem(a) = Prem(a1) ∪ . . . ∪ Prem(an); Conc(a) = ψ; Sub(a) = Sub(a1) ∪ . . . ∪ Sub(an) ∪ {a}; Rules(a) = Rules(a1) ∪ . . . ∪ Rules(an) ∪ {r}; DefRules(a) = Rules(a) ∩ Rd; TopRule(a) = r.

a is strict if DefRules(a) = ∅, otherwise defeasible; firm if Prem(a) ⊆ Kn, otherwise plausible.
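The bottom-up construction of Def. 48 can be sketched as a fixpoint computation. The simplified version below records only premises and conclusions (not Sub, Rules, or TopRule) and uses the access-control example's defeasible rules:

```python
# Fixpoint sketch of ASPIC+ argument construction (in the spirit of
# Def. 48); only Prem and Conc are tracked, for brevity.
K = {"Snores", "Professor"}
defeasible_rules = [
    (("Snores",), "Misbehaves"),        # d1
    (("Misbehaves",), "AccessDenied"),  # d2
    (("Professor",), "AccessAllowed"),  # d3
]

def build_arguments():
    """Return a map conclusion -> premises for every constructible argument."""
    args = {phi: frozenset({phi}) for phi in K}  # base case (1): premises
    changed = True
    while changed:                               # inductive case (2): apply rules
        changed = False
        for body, head in defeasible_rules:
            if head not in args and all(b in args for b in body):
                args[head] = frozenset().union(*(args[b] for b in body))
                changed = True
    return args

arguments = build_arguments()
print(sorted(arguments))
# ['AccessAllowed', 'AccessDenied', 'Misbehaves', 'Professor', 'Snores']
print(sorted(arguments["AccessDenied"]))  # ['Snores']
```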
![Page 111: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/111.jpg)
aspic+
Def. 49
Given arguments a and b, a defeats b iff a undercuts, successfully rebuts, or successfully undermines b, where:

- a undercuts b (on b′) iff Conc(a) is a contrary of ν(r) for some b′ ∈ Sub(b) s.t. r = TopRule(b′) ∈ Rd;
- a successfully rebuts b (on b′) iff Conc(a) ∈ φ̄ for some b′ ∈ Sub(b) of the form b″1, . . . , b″n =⇒ φ, and a ⊀ b′;
- a successfully undermines b (on φ) iff Conc(a) ∈ φ̄, φ ∈ Prem(b) ∩ Kp, and a ⊀ φ.
Def. 50
AF is the abstract argumentation framework defined by AT = ⟨AS, K⟩, where A is the smallest set of all finite arguments constructed from K, and → is the defeat relation on A.
![Page 112: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/112.jpg)
aspic+
aspic+

Rationality postulates: for a set of arguments S,

- P1 (direct consistency): {Conc(a) | a ∈ S} is consistent;
- P2 (indirect consistency): Cl({Conc(a) | a ∈ S}) is consistent;
- P3 (closure): {Conc(a) | a ∈ S} = Cl({Conc(a) | a ∈ S});
- P4 (sub-argument closure): ∀a ∈ S, Sub(a) ⊆ S.

These are guaranteed when:

- Rs is closed under transposition: if φ1, . . . , φn −→ ψ ∈ Rs, then ∀i = 1 . . . n, φ1, . . . , φi−1, ¬ψ, φi+1, . . . , φn −→ ¬φi ∈ Rs;
- Cl(Kn) is consistent;
- the argument ordering ⪯ is reasonable, namely:
  - ∀a, b: if a is strict and firm, and b is plausible or defeasible, then b ≺ a;
  - ∀a, b: if b is strict and firm, then b ⊀ a;
  - ∀a, a′, b such that a′ is a strict continuation of {a}: if a ⊀ b then a′ ⊀ b, and if b ⊀ a then b ⊀ a′;
  - given a finite set of arguments {a1, . . . , an}, let a+\i be some strict continuation of {a1, . . . , ai−1, ai+1, . . . , an}; then it is not the case that ∀i, a+\i ≺ ai.
![Page 113: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/113.jpg)
aspic+
Kp = { Snores; Professor }

Rd = { Snores =⇒d1 Misbehaves; Misbehaves =⇒d2 AccessDenied; Professor =⇒d3 AccessAllowed }

AccessAllowed = –AccessDenied (they are contradictories)

Snores <′ Professor; d1 < d2; d1 < d3; d3 < d2.
![Page 114: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/114.jpg)
aspic+
![Page 116: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/116.jpg)
![Page 117: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/117.jpg)
deductive argumentation
[BH01] [GH11] [BH14]
![Page 118: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/118.jpg)
deductive argumentation
Def. 53
A deductive argument is an ordered pair ⟨Φ, α⟩ where Φ ⊢i α; Φ is the support (premises, assumptions) of the argument, and α is its claim (conclusion).

- consistency constraint: Φ is consistent (not essential; cf. paraconsistent logic);
- minimality constraint: there is no Ψ ⊂ Φ such that Ψ ⊢ α.

Def. 56

If ⟨Φ, α⟩ and ⟨Ψ, β⟩ are arguments, then:

- ⟨Φ, α⟩ rebuts ⟨Ψ, β⟩ iff α ⊢ ¬β;
- ⟨Φ, α⟩ undercuts ⟨Ψ, β⟩ iff α ⊢ ¬∧Ψ.
![Page 119: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/119.jpg)
deductive argumentation
Def. 55
A classical logic argument from a set of formulae ∆ is a pair ⟨Φ, α⟩ such that: Φ ⊆ ∆; Φ ⊬ ⊥; Φ ⊢ α; and there is no Φ′ ⊂ Φ such that Φ′ ⊢ α.

Def. 57

Let a and b be two classical arguments. We define the following types of classical attack:

- a is a direct undercut of b if ¬Claim(a) ∈ Support(b);
- a is a classical defeater of b if Claim(a) ⊢ ¬∧Support(b);
- a is a classical direct defeater of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ⊢ ¬ϕ;
- a is a classical undercut of b if ∃Ψ ⊆ Support(b) s.t. Claim(a) ≡ ¬∧Ψ;
- a is a classical direct undercut of b if ∃ϕ ∈ Support(b) s.t. Claim(a) ≡ ¬ϕ;
- a is a classical canonical undercut of b if Claim(a) ≡ ¬∧Support(b);
- a is a classical rebuttal of b if Claim(a) ≡ ¬Claim(b);
- a is a classical defeating rebuttal of b if Claim(a) ⊢ ¬Claim(b).
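The entailment-based attacks (those using ⊢) require a theorem prover, but the syntactic cases can be checked directly on formula strings. A sketch, reusing the literals of the murder example (with ~ standing for ¬; the formula strings are illustrative encodings, not a parser):

```python
# Syntactic attack checks over string literals ("~" stands for negation).
# Only negation-of-a-literal cases are covered; entailment-based attack
# types are out of scope for this sketch.
def neg(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

def direct_undercut(a, b):
    """a = (support, claim); the negation of a's claim is in b's support."""
    return neg(a[1]) in b[0]

def literal_rebuttal(a, b):
    """Claims are each other's negation (literal-level classical rebuttal)."""
    return a[1] == neg(b[1])

a = ({"w", "w -> ~s"}, "~s")           # from the murder example
b = ({"s", "f", "(s & f) -> m"}, "m")
c = ({"b", "b -> ~w"}, "~w")
print(direct_undercut(a, b))   # True: ~s attacks the premise s
print(direct_undercut(c, a))   # True: ~w attacks the premise w
print(literal_rebuttal(a, b))  # False
```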
![Page 120: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/120.jpg)
deductive argumentation
Diagram: three classical arguments about treating high blood pressure:

- a1 = ⟨{bp(high), ok(diuretic), bp(high) ∧ ok(diuretic) → give(diuretic), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(diuretic) ∧ ¬ok(betablocker)⟩;
- a2 = ⟨{bp(high), ok(betablocker), bp(high) ∧ ok(betablocker) → give(betablocker), ¬ok(diuretic) ∨ ¬ok(betablocker)}, give(betablocker) ∧ ¬ok(diuretic)⟩;
- a3 = ⟨{symptom(emphysema), symptom(emphysema) → ¬ok(betablocker)}, ¬ok(betablocker)⟩.
![Page 121: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/121.jpg)
![Page 122: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/122.jpg)
a logic for clinical knowledge
[HW12] [Wil+15]
![Page 123: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/123.jpg)
a logic for clinical knowledge
Def. 58
Given treatments τ1 and τ2 and X ⊆ evidence, there are three kinds of inductive argument:

1. ⟨X, τ1 > τ2⟩: the evidence in X supports the claim that treatment τ1 is superior to τ2;
2. ⟨X, τ1 ∼ τ2⟩: the evidence in X supports the claim that treatment τ1 is equivalent to τ2;
3. ⟨X, τ1 < τ2⟩: the evidence in X supports the claim that treatment τ1 is inferior to τ2.

Def. 59

If the claim of argument ai is ϵi and the claim of argument aj is ϵj, then ai conflicts with aj whenever:

1. ϵi = τ1 > τ2 and (ϵj = τ1 ∼ τ2 or ϵj = τ1 < τ2);
2. ϵi = τ1 ∼ τ2 and (ϵj = τ1 > τ2 or ϵj = τ1 < τ2);
3. ϵi = τ1 < τ2 and (ϵj = τ1 > τ2 or ϵj = τ1 ∼ τ2).
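Def. 59 reduces to a simple check: two claims about the same treatment pair conflict iff their relations differ. A direct Python transcription:

```python
# Direct transcription of Def. 59: claims are tuples (t1, relation, t2)
# with relation in {'>', '~', '<'}; the three enumerated cases amount to
# "same treatment pair, different relation".
def conflicts(claim_i, claim_j):
    (t1_i, rel_i, t2_i), (t1_j, rel_j, t2_j) = claim_i, claim_j
    return (t1_i, t2_i) == (t1_j, t2_j) and rel_i != rel_j

print(conflicts(("CP", ">", "NT"), ("CP", "<", "NT")))  # True
print(conflicts(("CP", ">", "NT"), ("CP", ">", "NT")))  # False
```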
![Page 124: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/124.jpg)
a logic for clinical knowledge
Def. 60
For any pair of arguments ai and aj, and a preference relation R, ai attacks aj with respectto R iff ai conflicts with aj and it is not the case that aj is strictly preferred to ai accordingto R.
A domain-specific benefit preference relation is defined in [HW12]
Def. 61 (Meta arguments)
For a ∈ Arg(evidence), if there is an e ∈ support(a) such that:
∙ e is not statistically significant, and e is not a side-effect, then this is an attacker:⟨Not statistically significant⟩;
∙ e is a non-randomised and non-blind trial, then this is an attacker:⟨Non-randomized & non-blind trials⟩;
∙ e is a meta-analysis that concerns a narrow patient group, then this is an attacker:⟨Meta-analysis for a narrow patient group⟩.
![Page 125: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/125.jpg)
a logic for clinical knowledge
| ID | Left | Right | Indicator | Risk ratio | Outcome | p |
|----|------|-------|-----------|------------|---------|---|
| e1 | CP* | NT† | Pregnancy | 0.05 | superior | 0.01 |
| e2 | CP | NT | Ovarian cancer | 0.99 | superior | 0.07 |
| e3 | CP | NT | Breast cancer | 1.04 | inferior | 0.01 |
| e4 | CP | NT | DVT | 1.02 | inferior | 0.05 |
N.B.: Fictional data.
*Contraceptive pill.†No Treatment.
![Page 126: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/126.jpg)
a logic for clinical knowledge
Diagram: arguments ⟨{e1}, CP > NT⟩ and ⟨{e2}, CP > NT⟩, aggregated as ⟨{e1, e2}, CP > NT⟩, against arguments ⟨{e3}, CP < NT⟩ and ⟨{e4}, CP < NT⟩, aggregated as ⟨{e3, e4}, CP < NT⟩; a meta-argument ⟨Not statistically significant⟩ attacks the arguments relying on e2 (p = 0.07). Evidence table as on the previous slide.
![Page 127: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/127.jpg)
probabilistic argumentation frameworks
![Page 128: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/128.jpg)
epistemic approach
[Thi12] [Hun13]
![Page 129: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/129.jpg)
epistemic approach
[HT14] [BGV14]
![Page 130: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/130.jpg)
epistemic approach
An epistemic probability distribution* for an argumentation framework ∆ = ⟨A,→ ⟩ is:
P : A → [0, 1]
Def. 65
For an argumentation framework AF = ⟨A,→⟩ and a probability assignment P, theepistemic extension is
{a ∈ A | P(a) > 0.5}
*The tutorial presents a way to compute it for arguments based on classical deduction.
![Page 131: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/131.jpg)
epistemic approach
- COH: P is coherent if for every a, b ∈ A, if a attacks b then P(a) ≤ 1 − P(b).
- SFOU: P is semi-founded if P(a) ≥ 0.5 for every unattacked a ∈ A.
- FOU: P is founded if P(a) = 1 for every unattacked a ∈ A.
- SOPT: P is semi-optimistic if P(a) ≥ 1 − Σ_{b ∈ a⁻} P(b) for every a ∈ A with at least one attacker.
- OPT: P is optimistic if P(a) ≥ 1 − Σ_{b ∈ a⁻} P(b) for every a ∈ A.
- JUS: P is justifiable if P is coherent and optimistic.
- TER: P is ternary if P(a) ∈ {0, 0.5, 1} for every a ∈ A.
- RAT: P is rational if for every a, b ∈ A, if a attacks b then P(a) > 0.5 implies P(b) ≤ 0.5.
- NEU: P is neutral if P(a) = 0.5 for every a ∈ A.
- INV: P is involutary if for every a, b ∈ A, if a attacks b, then P(a) = 1 − P(b).

Let the event "a is accepted" be denoted as a, and let Eac(S) = {a | a ∈ S}. Then P is weakly p-justifiable iff ∀a ∈ A, ∀b ∈ a⁻, P(a) ≤ 1 − P(b).
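Several of these properties are easy to verify for a concrete distribution. A sketch for a two-argument framework with mutual attack (both the framework and the probability values are assumptions made for illustration):

```python
# Checking COH, FOU, and the epistemic extension (Def. 65) for an
# assumed AF where a and b attack each other, under an assumed P.
attacks = {("a", "b"), ("b", "a")}
P = {"a": 0.7, "b": 0.3}

def coherent(P, attacks):
    # COH: P(a) <= 1 - P(b) whenever a attacks b
    return all(P[a] <= 1 - P[b] for a, b in attacks)

def founded(P, attacks, args):
    # FOU: every unattacked argument gets probability 1
    attacked = {b for _, b in attacks}
    return all(P[x] == 1 for x in args if x not in attacked)

def epistemic_extension(P):
    # Def. 65: arguments believed to degree strictly above 0.5
    return {x for x, p in P.items() if p > 0.5}

print(coherent(P, attacks))           # True: 0.7 <= 1 - 0.3 and 0.3 <= 1 - 0.7
print(founded(P, attacks, P.keys()))  # True (vacuously: no unattacked argument)
print(epistemic_extension(P))         # {'a'}
```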
![Page 132: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/132.jpg)
epistemic approach
Def. 67
| Restriction on complete* probability function P | Classical semantics |
|---|---|
| No restriction | complete extensions |
| No arguments a such that P(a) = 0.5 | stable |
| Maximal no. of a such that P(a) = 1 | preferred |
| Maximal no. of a such that P(a) = 0 | preferred |
| Maximal no. of a such that P(a) = 0.5 | grounded |
| Minimal no. of a such that P(a) = 1 | grounded |
| Minimal no. of a such that P(a) = 0 | grounded |
| Minimal no. of a such that P(a) = 0.5 | semi-stable |
*Coherent, founded, and ternary. http://arxiv.org/abs/1405.3376
![Page 133: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/133.jpg)
structural approach
[Hun14]
![Page 134: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/134.jpg)
structural approach
P : {∆′ ⊑ ∆} → [0, 1]

| Subframework | Arguments | Probability |
|---|---|---|
| ∆1 | a ↔ b | 0.09 |
| ∆2 | a | 0.81 |
| ∆3 | b | 0.01 |
| ∆4 | (empty) | 0.09 |

PGR({a, b}) = 0.00
PGR({a}) = P(∆2) = 0.81
PGR({b}) = P(∆3) = 0.01
PGR({}) = P(∆1) + P(∆4) = 0.18
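PGR can be sketched by summing the probabilities of the subframeworks whose grounded extension equals the queried set. The grounded extensions below are hard-coded from the a ↔ b example rather than computed by a solver:

```python
# Sketch of the structural approach: P assigns probabilities to
# subframeworks; PGR(S) sums P over subframeworks whose grounded
# extension is S. Grounded extensions are hard-coded for this example.
subframeworks = [
    ("D1: a <-> b", frozenset(),      0.09),  # mutual attack: grounded is empty
    ("D2: a",       frozenset({"a"}), 0.81),
    ("D3: b",       frozenset({"b"}), 0.01),
    ("D4: empty",   frozenset(),      0.09),
]

def pgr(S):
    return sum(p for _, ext, p in subframeworks if ext == frozenset(S))

print(pgr({"a", "b"}))       # 0
print(pgr({"a"}))            # 0.81
print(round(pgr(set()), 2))  # 0.18
```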
![Page 135: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/135.jpg)
a computational framework
[Li15]
![Page 136: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/136.jpg)
a computational framework
Diagram: an ASPIC+ Argumentation System (logical language, inference rules, contrariness function, …) is converted to a Structured Argumentation Framework (SAF); the SAF is converted to a DAF, an EAF, or an Extended Evidential Argumentation Framework (EEAF); associating probabilities yields a PrAF, or a Probabilistic Extended Evidential Framework (PrEAF) via the EEAF, with the conversions preserving semantics.
![Page 137: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/137.jpg)
cispaces
![Page 138: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/138.jpg)
[Ton+15]
![Page 140: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/140.jpg)
What is the cause of the illness?
![Page 141: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/141.jpg)
Analyst Joe asks: is there a possible connection between the illness among people and the livestock illness? Hypothesis: CONTAMINATED WATER SUPPLY.

- Is this information credible?
- Are there alternative explanations?
- Is there evidence for the contamination of the water supply?

Analysts must reason with different types of evidence.
![Page 142: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/142.jpg)
research foci
Attributes of the problem domain
- Intelligence analysis is critical for making well-informed decisions;
- Large amounts of conflicting, incomplete information;
- Reasoning with different types of evidence.

Research Question

How can we develop agents that support reasoning with different types of evidence in a combined approach throughout the process of analysis?
![Page 143: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/143.jpg)
intelligence analysis
Def. 73
The application of individual and collective cognitive methods to evaluate, integrate, and interpret information about situations and events, in order to provide warning of potential threats or to identify opportunities.
Pirolli & Card model (diagram): external data sources are searched and filtered into a shoebox; searching for information, evidence, and support moves material from the shoebox into an evidence file (the FORAGING LOOP); the evidence is then schematized, a case is built around competing hypotheses (Hyp1, Hyp2), and a story is told in the final presentation, with re-evaluation feeding back at each step (the SENSE-MAKING LOOP); structure and effort increase along the way. Analysis is effective if TIMELY, TARGETED, and TRUSTED.
![Page 144: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/144.jpg)
collaboration among analysts
Team of analysts: more effective, prevents bias, brings different expertise and resources.

Diagram: challenges at different stages of analysis, mapped onto the schematize and build-case stages (shoebox → evidence file → hypotheses Hyp1, Hyp2): gather information, share data and analysis, integrate and annotate, assess credibility, identify plausible hypotheses, and mitigate cognitive biases.
![Page 145: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/145.jpg)
cispaces agent support
Diagram: the CISpaces interface (InfoBox, WorkBox, ToolBox, ReqBox, ChatBox) connects the analyst, through a communication layer, to three agents.

- CISpaces Interface: working space and access to agent support;
- Sensemaking Agent: supports collaborative analysis of arguments;
- Crowd-sourcing Agent: enables participation of large groups of contributors;
- Provenance Agent: assesses the credibility of information.
![Page 146: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/146.jpg)
![Page 147: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/147.jpg)
sensemaking agent (smag) - analysis construction
- Annotation of Pro links;
- Suggestion of CQs (Con links) to prevent cognitive biases.

Causal (Distribution of Activities):
- Typically, if C occurs, then E will occur;
- In this case, C occurs;
⇒ Therefore, in this case E will occur.

Association (Element Connections):
- An activity occurs, and an entity may be involved;
- To perform the activity some property H is required;
- The entity fits the property H;
⇒ Therefore, the entity is associated with the activity.
![Page 148: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/148.jpg)
smag analysis construction (cont.)
The analyst annotates pro links and nodes, which CISpaces matches to an argument scheme (candidate schemes include Expert Opinion, Cause to Effect, Identification, …). Example: "Lab expert on water toxins and chemicals asserts that there is a bacteria contaminating the water supply" supports (Pro) "Water supply in Kish is contaminated", matching the Expert Opinion scheme:

- E is an expert in domain D containing A;
- E asserts that A is true;
⇒ Therefore, A may plausibly be true.

The analyst then selects critical questions (CQ1: Is E an expert in D? CQ2: Is E reliable?), and CISpaces shows a negative answer to a CQ as a Con node (e.g. "The expert is not reliable", via CQ2) to prevent cognitive biases.
![Page 149: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/149.jpg)
smag hypotheses identification
controversial standpoints as extensions
1. Transforming the current workbox view into an argumentation framework.

Workbox view: premises q0, q1, q3; a CAUSE TO EFFECT link q0, q1 ⇒ q2; a link q1, q3 ⇒ q4; q2 and q4 are contradictory statements.

ASPIC+ argumentation framework:
- Premises: q0, q1, q3
- Rules: q0, q1 ⇒ q2; q1, q3 ⇒ q4
- Negation: q2 − q4
- Arguments: A0: q0; A1: q1; A2: q3; A4: A0, A1 ⇒ q2; A5: A1, A2 ⇒ q4

2. The ASPIC+/Dung's AF implementation identifies the sets of acceptable arguments.
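Step 2 can be sketched by brute force for this small framework: the arguments for q2 and q4 (A4 and A5) rebut each other, so each preferred extension accepts exactly one of them. A minimal Python enumeration (an illustration, not the CISpaces implementation):

```python
# Brute-force admissible/preferred computation for the AF induced by the
# q0..q4 example: A4 and A5 attack each other; A0, A1, A2 are unattacked.
from itertools import chain, combinations

args = {"A0", "A1", "A2", "A4", "A5"}
attacks = {("A4", "A5"), ("A5", "A4")}

def conflict_free(S):
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a):
    """S defends a iff every attacker of a is attacked by some member of S."""
    attackers = [b for (b, t) in attacks if t == a]
    return all(any((c, b) in attacks for c in S) for b in attackers)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(args), r) for r in range(len(args) + 1))]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
print(preferred)  # two preferred extensions: one accepts A4, the other A5
```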
![Page 150: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/150.jpg)
smag hypotheses identification (cont.)
3. CISpaces shows what conclusions can be supported:
- labelled according to the extensions computed;
- arguments are shared through the Argument Interchange Format (AIF).

Diagram: competing hypotheses about the toxic bacteria that contaminate the local water system in Kish: waterborne bacteria contaminate the water supply (e.g. formed by a sewage leakage in the water supply pipes) versus non-waterborne bacteria that were engineered and released in the water supply. Supporting evidence includes the unidentified illness affecting the local livestock in Kish, the illness among young and elderly people in Kish caused by bacteria, the presence of bacteria in the water supply, and the NGO lab reports that examined the contamination.
![Page 151: Argumentation in Artificial Intelligence](https://reader034.fdocuments.in/reader034/viewer/2022051516/55d28ee9bb61ebb6698b466d/html5/thumbnails/151.jpg)
[Architecture diagram: the analyst interacts through the interface (InfoBox, ToolBox, WorkBox, ReqBox, ChatBox), which connects via a communication layer to the Sensemaking, Crowd-sourcing, and Provenance agents; here the Crowd-sourcing agent is highlighted.]
crowdsourcing agent
1. Critical questions trigger the need for further information on a topic
2. The analyst calls the crowdsourcing agent (CWSAg)
3. CWSAg distributes the query to a large group of contributors
4. CWSAg aggregates the results and shows statistics to the analyst
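The aggregation step (4) amounts to tallying crowd answers per question. A minimal sketch, with made-up data and function names:

```python
# Minimal sketch of the CWSAg aggregation step (4): count crowd answers
# per question and report the majority answer. Data are made up.
from collections import Counter

def aggregate(responses):
    stats = {}
    for question, answer in responses:
        stats.setdefault(question, Counter())[answer] += 1
    # question -> (majority answer, per-answer counts)
    return {q: (c.most_common(1)[0][0], dict(c)) for q, c in stats.items()}

responses = [("Q0", "Clear"), ("Q0", "Clear"), ("Q0", "Contaminated"),
             ("Q1", "Contaminated"), ("Q1", "Contaminated")]
summary = aggregate(responses)
```

The per-question counts are the statistics shown back to the analyst.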
cwsag results import
[Imported crowd answers: the answer "Clear" to Q0 (Con) becomes the node "Q0-AGAINST: Water Contaminated"; the answer 21.1 to Q1 (Pro) becomes the node "Q1-FOR: Water Contaminated"; the two nodes are marked as CONTRADICTORY.]
[Architecture diagram: the analyst interacts through the interface (InfoBox, ToolBox, WorkBox, ReqBox, ChatBox), which connects via a communication layer to the Sensemaking, Crowd-sourcing, and Provenance agents; here the Provenance agent is highlighted.]
[Scenario: analyst Joe receives two pieces of information about BORDER L1-L2 — info ij ("Gang heading South"), produced by a Surveillance chain in which an Observer's observation is passed as a message by a Messenger to an Informer, and info ik ("Gang crossing North border"), produced by Image Processing; their provenance chains are GP(ij) and GP(ik).]
argument from provenance
- Given a provenance chain GP(ij) of information ij:
  - (Where?) it was derived from an entity A
  - (Who?) it was associated with actor AG
  - (What?) it was generated by activity P1
  - (How?) it was informed by activity P2
  - (Why?) it was generated to satisfy goal X
  - (When?) it was generated at time T
  - (Which?) it was generated by using some entities A1, …, AN
  - where A, AG, P1, … belong to GP(ij)
- the stated elements of GP(ij) infer that information ij is true,
⇒ Therefore, information ij may plausibly be taken to be true.
CQA1: Is ij consistent with other information?
CQA2: Is ij supported by evidence?
CQA3: Does GP(ij) contain other elements that lead us not to believe ij?
CQA4: Are there provenance elements that should have been included for believing ij?
argument for provenance preference
- Given information ij and ik,
- and their known parts of the provenance chains GP(ij) and GP(ik),
- if there exists a criterion Ctr such that GP(ij) ≪Ctr GP(ik), then ij ≪ ik,
- a criterion Ctr′ leads to assert that GP(ij) ≪Ctr′ GP(ik),
⇒ Therefore, ik should be preferred to ij.
Example criteria: trustworthiness, reliability, timeliness, shortest path.
CQB1: Does a different criterion Ctr1 , such that GP(ij)≫Ctr1 GP(ik) lead ij ≪ ik not being valid?
CQB2: Is there any exception to criterion Ctr such that even if a provenance chain GP(ik) is preferred toGP(ij), information ik is not preferred to information ij?
CQB3: Is there any other reason for believing that the preference ij ≪ ik is not valid?
pvag provenance analysis & import
[Provenance graph for the imported item "Livestock Information" (INFO: livestock illness; prov:time 2015-04-27T02:27:40Z): it wasGeneratedBy an Extract activity that used a US Patrol Report and wasAssociatedWith the US Team Patrol; the report wasDerivedFrom a Farm Daily Report (type: PrimarySource) generated by a Prepare activity associated with a Kish farmer; an Annotate activity used Livestock Pictures. The analysis matches a Primary Source Pattern and yields a provenance explanation.]

IMPORT OF PREFERENCES?
theories/technologies integrated
∙ Argument representation:
  ∙ Argument Schemes and Critical Questions (domain specific)
  ∙ "Bipolar-like" graph for user consumption
  ∙ AIF (extension for provenance)
  ∙ ASPIC(+)
  ∙ Arguments based on preferences (partially under development)
∙ Theoretical framework for acceptability status:
  ∙ AF
  ∙ PrAF (case study for [Li15])
  ∙ AFRA for preference handling (under development)
∙ Computational machinery: jArgSemSAT
..algorithms and implementations
[Cha+15]
ad-hoc procedures
ArgTools
[NAD14]
csp-based approach
ConArg
[BS12]
csp-based approach
A Constraint Satisfaction Problem (CSP) P is a triple P = ⟨X, D, C⟩ such that:

∙ X = ⟨x1, …, xn⟩ is a tuple of variables;
∙ D = ⟨D1, …, Dn⟩ is a tuple of domains, where each xi takes values in Di;
∙ C = ⟨C1, …, Ct⟩ is a tuple of constraints, where ∀j, Cj = ⟨RSj, Sj⟩ with scope Sj ⊆ {x1, …, xn} and RSj a relation over the domains of the variables in Sj.

A solution to the CSP P is A = ⟨a1, …, an⟩ where ∀i, ai ∈ Di and, ∀j, RSj holds on the projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
csp-based approach
Given an AF:
1. create a variable for each argument, with domain {0, 1} — ∀ai ∈ A, ∃xi ∈ X such that Di = {0, 1};
2. describe constraints associated to the different definitions of Dung's argumentation framework: e.g. {a1, a2} ⊆ A with a1 → a2 is conflict-free iff ¬(x1 = 1 ∧ x2 = 1);
3. solve the CSP problem.
asp-based approach
ASPARTIX-D / ASPARTIX-V / DIAMOND
[EGW10] [Dvo+11]
asp-based approach
πST = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X, Y);
        defeated(X) ← in(Y), defeat(Y, X);
        ← out(X), not defeated(X) }.
Tests for subset-maximality exploit the metasp optimisation frontend for theASP-package gringo/claspD.
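The guess-and-check reading of the stable-semantics program can be sketched in plain Python: a set S is stable iff it is conflict-free and defeats every argument outside S. This is an exponential illustration, not a practical solver.

```python
# Plain-Python reading of the guess-and-check ASP program for stable
# semantics: S is stable iff it is conflict-free and defeats every
# argument outside S. (Exponential sketch, not a practical solver.)
from itertools import combinations

def stable_extensions(args, defeats):
    exts = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            S = set(subset)
            conflict_free = not any(x in S and y in S for (x, y) in defeats)
            defeated = {y for (x, y) in defeats if x in S}
            if conflict_free and defeated == set(args) - S:
                exts.append(S)
    return exts

# An odd attack cycle has no stable extension; a mutual attack has two.
no_ext = stable_extensions(["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")])
two_ext = stable_extensions(["a", "b"], [("a", "b"), ("b", "a")])
```

The two constraint rules of πST correspond to the conflict-freeness and the "every out argument is defeated" checks here.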
sat-based approaches
Cegartix
[Dvo+12]
ArgSemSAT/jArgSemSAT/LabSATSolver
[Cer+14b]
sat-based approaches
[Dvo+12]
⋀_{a→b} (¬x_a ∨ ¬x_b)  ∧  ⋀_{b→c} ( ¬x_c ∨ ⋁_{a→b} x_a )

i.e. the chosen set is conflict-free and each accepted argument is defended against its attackers.
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
C1:
Lab(a1) = in ⇔ ∀a2 ∈ a1⁻: Lab(a2) = out
Lab(a1) = out ⇔ ∃a2 ∈ a1⁻: Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻: Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻: Lab(a3) = undec
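The C1 conditions can be checked by brute force over the three labels — a toy stand-in for the CNF encoding used by ArgSemSAT:

```python
# Toy brute-force check of the C1 conditions over the three labels
# (a stand-in for the CNF encoding used by ArgSemSAT).
from itertools import product

def complete_labellings(args, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    found = []
    for labels in product(("in", "out", "undec"), repeat=len(args)):
        L = dict(zip(args, labels))
        ok = True
        for a in args:
            all_out = all(L[b] == "out" for b in attackers[a])
            some_in = any(L[b] == "in" for b in attackers[a])
            # Lab(a)=in iff all attackers out; Lab(a)=out iff some attacker in;
            # undec is the residual case.
            if (L[a] == "in") != all_out or (L[a] == "out") != some_in:
                ok = False
                break
        if ok:
            found.append(L)
    return found

# Mutual attack a <-> b: {in,out}, {out,in} and {undec,undec}.
labs = complete_labellings(["a", "b"], [("a", "b"), ("b", "a")])
```

Only the in/out biconditionals need to be tested: given them, the undec condition holds automatically for complete labellings.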
sat-based approaches
[Cer+14b]
⋀_{i∈{1,…,k}} ( (I_i ∨ O_i ∨ U_i) ∧ (¬I_i ∨ ¬O_i) ∧ (¬I_i ∨ ¬U_i) ∧ (¬O_i ∨ ¬U_i) )
∧ ⋀_{i | ϕ(i)⁻ = ∅} (I_i ∧ ¬O_i ∧ ¬U_i)
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( I_i ∨ ⋁_{j | ϕ(j)→ϕ(i)} ¬O_j )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬I_i ∨ O_j )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬I_j ∨ O_i )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ¬O_i ∨ ⋁_{j | ϕ(j)→ϕ(i)} I_j )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{k | ϕ(k)→ϕ(i)} ( U_i ∨ ¬U_k ∨ ⋁_{j | ϕ(j)→ϕ(i)} I_j )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ⋀_{j | ϕ(j)→ϕ(i)} ( ¬U_i ∨ ¬I_j )
∧ ⋀_{i | ϕ(i)⁻ ≠ ∅} ( ¬U_i ∨ ⋁_{j | ϕ(j)→ϕ(i)} U_j )
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
Ca1:
Lab(a1) = in ⇔ ∀a2 ∈ a1⁻: Lab(a2) = out
Lab(a1) = out ⇔ ∃a2 ∈ a1⁻: Lab(a2) = in
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
Cb1:
Lab(a1) = out ⇔ ∃a2 ∈ a1⁻: Lab(a2) = in
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻: Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻: Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
Cc1:
Lab(a1) = in ⇔ ∀a2 ∈ a1⁻: Lab(a2) = out
Lab(a1) = undec ⇔ ∀a2 ∈ a1⁻: Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻: Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
C2:
Lab(a1) = in ⇒ ∀a2 ∈ a1⁻: Lab(a2) = out
Lab(a1) = out ⇒ ∃a2 ∈ a1⁻: Lab(a2) = in
Lab(a1) = undec ⇒ ∀a2 ∈ a1⁻: Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻: Lab(a3) = undec
sat-based approaches
[Cer+14b]
If a1 is not attacked, Lab(a1) = in
C3:
Lab(a1) = in ⇐ ∀a2 ∈ a1⁻: Lab(a2) = out
Lab(a1) = out ⇐ ∃a2 ∈ a1⁻: Lab(a2) = in
Lab(a1) = undec ⇐ ∀a2 ∈ a1⁻: Lab(a2) ≠ in ∧ ∃a3 ∈ a1⁻: Lab(a3) = undec
sat-based approaches
[Cer+14b]
[Plot: IPC normalised to 100 with respect to the number of arguments (50–200), comparing the encodings C1, Ca1, Cb1, Cc1, C2 and C3.]
iccma 2015
The First International Competition on Computational Models of Argumentation
http://argumentationcompetition.org/
Results announced yesterday
a parallel algorithm
[BGG05] [Cer+14a] [Cer+15]
a parallel algorithm

[Figure: example AF over arguments a, b, c, d, e, f, g, h; the SCC decomposition places a–f in Level 1 and g, h in Level 2.]

Labellings of Level 1:
⟨{a, c, e}, {b, d, f}, {}⟩,
⟨{a, c, f}, {b, d, e}, {}⟩,
⟨{a, d, e}, {b, c, f}, {}⟩,
⟨{a, d, f}, {b, c, e}, {}⟩
a parallel algorithm
Moving to the last level:

B1: no argument in S3 is attacked from "outside" for
Lab ∈ {⟨{a, c, e}, {b, d, f}, {}⟩, ⟨{a, c, f}, {b, d, e}, {}⟩}

B2: g is attacked by d for
Lab ∈ {⟨{a, d, e}, {b, c, f}, {}⟩, ⟨{a, d, f}, {b, c, e}, {}⟩}
Cases B1 and B2 are computed in parallel.
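The per-level parallelism can be sketched with `concurrent.futures`. This is only an illustration of dispatching independent cases concurrently; the function names are placeholders and the actual algorithm in [Cer+15] is considerably more involved.

```python
# Illustrative sketch of per-level parallelism: independent cases of the
# same level are handled concurrently. labellings_for_case is a
# placeholder; the real algorithm in [Cer+15] is far more involved.
from concurrent.futures import ThreadPoolExecutor

def labellings_for_case(case, level1_labellings):
    # Placeholder for the per-SCC labelling computation, which depends
    # on the labels already assigned in earlier levels.
    return (case, len(level1_labellings))

def process_level(cases, level1_labellings):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda c: labellings_for_case(c, level1_labellings),
                             cases))

# Cases B1 and B2, each conditioned on two Level-1 labellings.
results = process_level(["B1", "B2"], [("Lab1",), ("Lab2",)])
```

Each case only needs the labels of its attackers from earlier levels, which is what makes the cases independent and safe to run in parallel.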
a parallel algorithm
⟨{a, c, e, g}, {b, d, f, h}, {}⟩,
⟨{a, c, e, h}, {b, d, f, g}, {}⟩,
⟨{a, c, f, g}, {b, d, e, h}, {}⟩,
⟨{a, c, f, h}, {b, d, e, g}, {}⟩,
⟨{a, d, e, h}, {b, c, f, g}, {}⟩,
⟨{a, d, f, h}, {b, c, e, g}, {}⟩
We need to be smart
Holger H. Hoos, Invited Keynote Talk at ECAI2014
[VCG14] [CGV14]
features from an argumentation graph
Directed Graph (26 features)
∙ Structure: # vertices (|A|), # edges (|→|), # vertices / # edges (|A|/|→|), # edges / # vertices (|→|/|A|), density
∙ Degree (attackers): average, stdev, max, min
∙ SCCs: #, average, stdev, max, min
∙ Structure: # self-defeating, # unattacked, flow hierarchy, Eulerian, aperiodic
∙ CPU-time: …

Undirected Graph (24 features)
∙ Structure: # edges, # vertices / # edges, # edges / # vertices, density
∙ Degree: average, stdev, max, min
∙ Components: #, average, stdev, max, min
∙ Structure: transitivity
∙ 3-cycles: #, average, stdev, max, min
∙ CPU-time: …
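Several of the cheap "graph size" and "degree" features can be computed directly. A sketch using only the standard library (the dictionary keys are illustrative names, not those of the original feature extractor):

```python
# Sketch computing a few of the cheap "graph size" and "degree"
# features of an AF with only the standard library.
from statistics import mean, pstdev

def af_features(args, attacks):
    n, m = len(args), len(attacks)
    attackers = {a: 0 for a in args}       # in-degree = number of attackers
    for (_, target) in attacks:
        attackers[target] += 1
    deg = list(attackers.values())
    return {
        "num_vertices": n,
        "num_edges": m,
        "density": m / (n * (n - 1)) if n > 1 else 0.0,
        "attackers_avg": mean(deg),
        "attackers_stdev": pstdev(deg),
        "attackers_max": max(deg),
        "attackers_min": min(deg),
        "num_unattacked": sum(d == 0 for d in deg),
    }

feats = af_features(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
```

Structural features such as SCC statistics or flow hierarchy are more expensive, which is why the extraction CPU-time is itself tracked as a feature.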
how hard is it to get the features?
Directed Graph Features (DG)                Undirected Graph Features (UG)
Class       CPU-Time        # feat          Class       CPU-Time        # feat
            Mean    stdDev                              Mean    stDev
Graph Size  0.001   0.009   5               Graph Size  0.001   0.003   4
Degree      0.003   0.009   4               Degree      0.002   0.004   4
SCC         0.046   0.036   5               Components  0.011   0.009   5
Structure   2.304   2.868   5               Structure   0.799   0.684   1
                                            Triangles   0.787   0.671   5

Average CPU-time (and stdev) needed for extracting the features of a given class.
protocol: some numbers
∙ |SCCS∆| in 1:100;
∙ |A| in 10:5,000;
∙ |→| in 25:270,000 (Erdös-Rényi, p uniformly distributed);
∙ Overall 10,000 AFs.
∙ Cutoff time of 900 seconds (also assigned to runs that crashed, timed out, or ran out of memory).
∙ EPMs both for Regression (Random Forests) and Classification (M5-Rules) using WEKA;
∙ Evaluation using a 10-fold cross-validation approach on a uniform random permutation of instances.
result 1: best features for prediction
Solver      B1                          B2                          B3
AspartixM   number of arguments         density of directed graph   size of max. SCC
PrefSAT     density of directed graph   number of SCCs              aperiodicity
NAD-Alg     density of directed graph   CPU-time for density        CPU-time for Eulerian
SSCp        density of directed graph   number of SCCs              size of the max SCC
Determined by a greedy forward search based on the Correlation-based FeatureSelection (CFS) attribute evaluator.
AF structure SCCs CPU-time for feature extraction
result 2: predicting (log)runtime
RMSE of Regression (lower is better)
            B1      B2      B3      DG      UG      SCC     All
AspartixM   0.66    0.49    0.49    0.48    0.49    0.52    0.48
PrefSAT     1.39    0.93    0.93    0.89    0.92    0.94    0.89
NAD-Alg     1.48    1.47    1.47    0.77    0.57    1.61    0.55
SSCp        1.36    0.80    0.78    0.75    0.75    0.79    0.74
RMSE = √( (1/n) ∑ᵢ₌₁ⁿ ( log10(tᵢ) − log10(yᵢ) )² )
AF structure SCCs CPU-time for feature extraction Undirect Graph
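The evaluation metric above is the root-mean-square error over log-runtimes. A short sketch with toy data (t = observed runtimes, y = EPM-predicted runtimes):

```python
# RMSE over log10 runtimes, as used to evaluate the regression EPMs
# (t = observed runtimes, y = predicted runtimes; toy data).
from math import log10, sqrt

def log_rmse(observed, predicted):
    n = len(observed)
    return sqrt(sum((log10(t) - log10(y)) ** 2
                    for t, y in zip(observed, predicted)) / n)

# One prediction exact, one off by a factor of 10.
err = log_rmse([10.0, 100.0], [10.0, 1000.0])
```

Working in log space means the metric penalises being off by a multiplicative factor, which matches how solver runtimes spread over orders of magnitude.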
result 3: best features for classification
C-B1                    C-B2                        C-B3
number of arguments     density of directed graph   min attackers
Determined by a greedy forward search based on the Correlation-based FeatureSelection (CFS) attribute evaluator.
AF structure Attackers
result 4: classification, i.e. selecting the best solver for a given af
Classification (higher is better)
                |A|     density min attackers   DG      UG      SCC     All
Accuracy        48.5%   70.1%   69.9%           78.9%   79.0%   55.3%   79.5%
Prec. AspartixM 35.0%   64.6%   63.7%           74.5%   74.9%   42.2%   76.1%
Prec. PrefSAT   53.7%   67.8%   68.1%           79.6%   80.5%   60.4%   80.1%
Prec. NAD-Alg   26.5%   69.2%   69.0%           81.7%   85.1%   35.3%   86.0%
Prec. SSCp      54.3%   73.0%   72.7%           76.6%   76.8%   57.8%   77.2%
AF structure Attackers Undirect Graph SCCs
result 5: algorithm selection
Metric: Fastest (max. 1007)

AspartixM 106
NAD-Alg 170
PrefSAT 278
SSCp 453
EPMs Regression 755
EPMs Classification 788

Metric: IPC* (max. 1007)

NAD-Alg 210.1
AspartixM 288.3
PrefSAT 546.7
SSCp 662.4
EPMs Regression 887.7
EPMs Classification 928.1
*Scale of (log)relative performance
..the frontier
belief revision and argumentation
[FKS09] [FGS13]
belief revision and argumentation
Potential cross-fertilisation
Argumentation in Belief Revision
∙ Justification-based truth maintenance system
∙ Assumption-based truth maintenance system

Some conceptual differences: in revision, external beliefs are compared with internal beliefs and, after a selection process, some sentences are discarded, other ones are accepted. [FKS09]

Belief Revision in Argumentation
∙ Changing by adding or deleting an argument.
∙ Changing by adding or deleting a set of arguments.
∙ Changing the attack (and/or defeat) relation among arguments.
∙ Changing the status of beliefs (as conclusions of arguments).
∙ Changing the type of an argument (from strict to defeasible, or vice versa).
abstract dialectical framework
[Bre+13]
abstract dialectical framework
Dependency Graph + Acceptance Conditions
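The slogan can be made concrete with a toy reading: each statement has parents in the dependency graph and a boolean acceptance condition over them; a two-valued interpretation is a model iff every statement's truth value agrees with its condition. This is an illustrative sketch only, not an ADF solver.

```python
# Toy two-valued models of an Abstract Dialectical Framework: each
# statement s has an acceptance condition C_s over its parents; an
# interpretation v is a model iff v(s) = C_s(v) for every statement.
# (Illustrative sketch, not an ADF solver.)
from itertools import product

def two_valued_models(statements, conditions):
    models = []
    for bits in product((False, True), repeat=len(statements)):
        v = dict(zip(statements, bits))
        if all(v[s] == conditions[s](v) for s in statements):
            models.append(v)
    return models

# Example ADF: a is unconditionally accepted; b is accepted iff a is not.
conditions = {"a": lambda v: True, "b": lambda v: not v["a"]}
models = two_valued_models(["a", "b"], conditions)
```

With attack-shaped conditions (accepted iff no parent accepted) this collapses to a Dung AF, which is what makes ADFs a generalisation.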
argumentation and social networks
[LM11] [ET13]
argumentation and social networks
a: The Wonder-Phone is the best new generation phone. (+20 / −20)
b: No, the Magic-Phone is the best new generation phone. (+20 / −20)
c: Here is a [link] to a review of the Magic-Phone giving poor scores due to bad battery performance. (+60 / −10)
d: The author of c is ignorant, since subsequent reviews noted that only one of the first editions had such problems: [links]. (+10 / −40)
e: d is wrong. I found out c knows about that but withheld the information. Here is a [link] to another thread proving it! (+40 / −10)
argument mining
[CV12] [Bud+14]
argument mining
http://www-sop.inria.fr/NoDE/
http://corpora.aifdb.org/
natural language interfaces
[CTO14] [Cam+14]
natural language interfaces
a1 : σA ⇒ γ
a2 : σB ⇒ ¬γ
a3 : ⇒ a1 ≺ a2

First Scenario
a1: Alice suggests moving in together with Jane
a2: Stacy suggests otherwise because Jane might have a hidden agenda
a3: Stacy is your best friend

% agreement — a1: 12.5, a2: 68.8, don't know: 18.8

• • • • •

Second Scenario
a1: TV1 suggests that tomorrow will rain
a2: TV2 suggests that tomorrow will be cloudy but will not rain
a3: TV2 is generally more accurate than TV1

% agreement — a1: 5.0, a2: 50.0, don't know: 45.0
natural language interfaces
Scrutable Autonomous Systems (in particular from 7’ 30”)
..conclusion
..credits
credits
Template adapted from mtheme: https://github.com/matze/mtheme