WP7: Empirical Studies
Presenters: Paolo Besana, Nardine Osman, Dave Robertson
Outline of This Talk
• Introduce overall framework
• Identify four key areas:
– Interaction availability
– Consistency interaction-peer
– Consistency peer-peer
– Consistency with environment
In each of these areas it is impossible to guarantee the general property we would ideally require, so the goal of the analysis is to identify viable engineering compromises and explore how they scale.
Basic Conceptual Framework
[Diagram: peer P running interaction model M(P,R), alongside peers P1 … Pn, each with its own environment EP, EP1 … EPn]
P = process name
R = role of P
M(P,R) = interaction model for P in role R
EP = environment of P
Simulation as Clause Rewriting
Ensuring Interactions are Available
R ∈ R(P) → ◊(∃ M(P,R) ∈ MP . (i(M(P,R)) → ◊ a(M(P,R))))
R(P) = roles P wants to undertake
MP = interactions known to P, {M(P,R), …}
i(M(P,R)) = M(P,R) is initiated
a(M(P,R)) = M(P,R) is completed successfully
Specific Question
• Suppose that the same interaction patterns are being used repeatedly in overlapping peer groups.
• To what extent can basic statistical information about success/failure of interaction models solve matchmaking problems?
See Deliverable 7.1 for discussion of this
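As a rough illustration of the question above, a matchmaker could rank candidate interaction models for a role by their smoothed observed success rate. The sketch below is a minimal, hypothetical version; the names (`Matchmaker`, `record_outcome`, `best_model`) and the example data are invented here, not taken from Deliverable 7.1.

```python
from collections import defaultdict

class Matchmaker:
    """Hypothetical statistical matchmaker: picks the interaction model
    with the best Laplace-smoothed success rate for a given role."""

    def __init__(self):
        # (role, model) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record_outcome(self, role, model, success):
        s = self.stats[(role, model)]
        s[0] += 1 if success else 0
        s[1] += 1

    def best_model(self, role, candidates):
        # Laplace smoothing: an unseen model scores 1/2 rather than 0
        def score(m):
            s, n = self.stats[(role, m)]
            return (s + 1) / (n + 2)
        return max(candidates, key=score)

# Invented example: one model succeeds repeatedly, the other fails
mm = Matchmaker()
for _ in range(8):
    mm.record_outcome("buyer", "auction_v1", True)
for _ in range(8):
    mm.record_outcome("buyer", "auction_v2", False)
print(mm.best_model("buyer", ["auction_v1", "auction_v2"]))  # auction_v1
```

Even this crude frequency count already answers the matchmaking question for stable, repeated interaction patterns; the open issue is how it degrades as peer groups overlap and drift.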
Consistency Peer - Interaction Model
A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ◊ B ∈ K(M(P,R))) → σ(A ∧ B)
K(X) = knowledge derivable from X
σ(F) = F is consistent
Specific Question
• Each interaction model imposes temporal constraints
• Peers have deontic constraints
• What sorts of properties required by peers (e.g. trust properties) or by interaction modellers (e.g. fairness properties) can we test using this information alone?
Example
In an auction, the auctioneer agent wants an interaction protocol that enforces truth telling on the bidders' side.
A = [bid(bidder,V) ⇒ win(bidder,PV)] ⋀ [bid(bidder,B) ⇒ win(bidder,PB) ⋀ B ≠ V] ⋀ PB ≮ PV
where A ∈ K(P)
We would like to verify:
A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ◊ B ∈ K(M(P,R))) → σ(A ∧ B)
[Diagram: the state space of M(P,R), with states 1–4]
Verifying σ(A∧B)
Verify that M(P,R) satisfies A:
• Is A satisfied at state 1?
• If the result is achieved, terminate; otherwise move to the next state(s) and repeat
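The state-by-state procedure above can be sketched as a simple breadth-first walk over the states of M(P,R): test the property at the current state, and if it is not yet achieved, move to the successor states and repeat. The 4-state transition table and the `achieved` predicate below are invented for illustration.

```python
def verify(transitions, start, achieved):
    """Breadth-first walk of a state space: succeed as soon as some
    reachable state satisfies `achieved`; fail if the space is exhausted."""
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop(0)
        if achieved(state):          # "Is A satisfied at state 1?"
            return True
        for nxt in transitions.get(state, []):   # "go to next state(s)"
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# A toy M(P,R) with states 1..4, where the property holds at state 4
transitions = {1: [2, 3], 2: [4], 3: [4]}
print(verify(transitions, 1, lambda s: s == 4))  # True
```

The real checker works over protocol clauses rather than an explicit state table, but the control loop is the same.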
Property Checking Framework
[Diagram: the interaction state-space, temporal properties, and deontic constraints feed a model checker built on the XSB system, a tabled Prolog engine]
Temporal Proof Rules
satisfies(E, tt) ← true
satisfies(E, Φ1 ⋀ Φ2) ← satisfies(E, Φ1) ⋀ satisfies(E, Φ2)
satisfies(E, Φ1 ⋁ Φ2) ← satisfies(E, Φ1) ⋁ satisfies(E, Φ2)
satisfies(E, <A>Φ) ← ∃F. trans(E, A, F) ⋀ satisfies(F, Φ)
satisfies(E, [A]Φ) ← ∀F. trans(E, A, F) → satisfies(F, Φ)
satisfies(E, μZ.Φ) ← satisfies(E, Φ)
satisfies(E, νZ.Φ) ← dual(Φ, Φ′) ⋀ ¬satisfies(E, Φ′)
LCC Transition Rules
trans(E::D, A, F) ← trans(D, A, F)
trans(E1 or E2, A, F) ← trans(E1, A, F) ⋁ trans(E2, A, F)
trans(E1 then E2, A, E2) ← trans(E1, A, nil)
trans(E1 then E2, A, F then E2) ← trans(E1, A, F) ⋀ F ≠ nil
trans(E1 par E2, A, F par E2) ← trans(E1, A, F)
trans(E1 par E2, A, E1 par F) ← trans(E2, A, F)
trans(M ⇐ P, in(M), null) ← true
trans(M ⇒ P, out(M), null) ← true
trans(E ← C, #(X), E) ← X in C ⋀ sat(X) ⋀ sat(C)
trans(E ← C, A, F) ← (A ≠ #) ⋀ sat(C) ⋀ trans(E, A, F)
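To make the two rule sets concrete, here is a minimal executable rendering of a fragment of them: the propositional connectives, the <A>Φ and [A]Φ modalities, and the or/then/message cases of trans. The tuple encoding of formulas and protocol terms is an assumption of this sketch; fixpoints, par, guards, and the full LCC term language are omitted.

```python
def trans(e, a):
    """Yield states F such that e --a--> F, mirroring a fragment of the
    LCC transition rules (messages, 'or' choice, 'then' sequencing)."""
    kind = e[0]
    if kind == "out" and a == ("out", e[1]):      # M => P, action out(M)
        yield ("nil",)
    elif kind == "in" and a == ("in", e[1]):      # M <= P, action in(M)
        yield ("nil",)
    elif kind == "or":                            # E1 or E2
        yield from trans(e[1], a)
        yield from trans(e[2], a)
    elif kind == "then":                          # E1 then E2
        for f in trans(e[1], a):
            yield e[2] if f == ("nil",) else ("then", f, e[2])

def satisfies(e, phi):
    """Mirror the satisfies/2 proof rules for tt, and/or, <A>Φ, [A]Φ."""
    kind = phi[0]
    if kind == "tt":
        return True
    if kind == "and":
        return satisfies(e, phi[1]) and satisfies(e, phi[2])
    if kind == "or":
        return satisfies(e, phi[1]) or satisfies(e, phi[2])
    if kind == "diamond":   # <A>Φ: some A-successor satisfies Φ
        return any(satisfies(f, phi[2]) for f in trans(e, phi[1]))
    if kind == "box":       # [A]Φ: every A-successor satisfies Φ
        return all(satisfies(f, phi[2]) for f in trans(e, phi[1]))
    raise ValueError(phi)

# Check <out(ask)><in(answer)>tt against: ask => P then answer <= P
proto = ("then", ("out", "ask"), ("in", "answer"))
phi = ("diamond", ("out", "ask"), ("diamond", ("in", "answer"), ("tt",)))
print(satisfies(proto, phi))  # True
```

The actual WP7 checker runs these rules as tabled Prolog under XSB; the Python version is only meant to show how the satisfies and trans layers fit together.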
Consistency Peer - Peer
A ∈ K(P) ∧ Pi ∈ P(M(P,R)) ∧ B ∈ K(Pi) → σ(A ∧ B)
P(M(P,R)) = peers involved in M(P,R)
Specific Question
• Agents in open environments may have different ontologies
• Guaranteeing complete mappings between them is infeasible (ontologies can be inconsistent, cover different domains, etc.)
• Agents are interested in performing tasks: mapping is required only for the terms relevant to the interaction at hand
• Repetition of tasks provides a basis for statistically modelling the contexts of interactions
• To what extent can interaction models be used to focus ontology mapping on the relevant sections of the ontology?
Approach
• Predicting the possible content of a message before processing it can help to focus the mapping:
– with no knowledge of the context and state of an interaction, a received message could be anything
– the context can be used to guess the possible content of messages, filtering out unrelated elements
– the guessed content is suggested to the ontology-mapping engine
• The entities in a received message mi(e1,...,en) are bound by the context of the interaction:
– some entities are specific to the interaction type (purchase, request for information, ...)
– the set of possible entities is bounded by concepts previously introduced in the interaction
– different entities may appear in a specific message with different frequencies
Implementation
Two phases:
• Creating the model:
– entities appearing in messages are counted, yielding their prior and conditional frequencies
– ontological relations between entities in different messages are checked, and the verified relations are counted
• Predicting the content of a message:
– when a message is received, the probability distribution over all terms is computed from the collected information and the current state of the interaction
– the most probable terms form the set of suggestions for the ontology-mapping engine
The aim is to obtain the smallest possible set that is most likely to contain the entities actually used in the message.
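A minimal sketch of this two-phase predictor, under the simplifying assumption that the "context" is just the preceding message pattern and that only conditional frequencies (no ontological relations) are learned; all names (`MessagePredictor`, `observe`, `suggest`) and the example data are illustrative.

```python
from collections import Counter

class MessagePredictor:
    """Phase 1: count which entities follow which contexts.
    Phase 2: suggest the most frequent entities for the current context."""

    def __init__(self):
        self.counts = Counter()          # keyed by (context, entity)

    def observe(self, context, entity):
        self.counts[(context, entity)] += 1

    def suggest(self, context, k):
        """The k entities most frequent in this context: the smallest set
        most likely to contain the term actually used in the message."""
        ranked = Counter({e: n for (c, e), n in self.counts.items()
                          if c == context})
        return [e for e, _ in ranked.most_common(k)]

# Invented interaction history: what follows each message pattern
mp = MessagePredictor()
runs = ([("ask(price)", "laptop")] * 6 + [("ask(price)", "phone")] * 3
        + [("ask(stock)", "cable")] * 4)
for ctx, ent in runs:
    mp.observe(ctx, ent)
print(mp.suggest("ask(price)", 1))  # ['laptop']
```

In the real system the suggestion set is handed to the ontology-mapping engine, which only has to map the suggested terms rather than the whole ontology.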
Mapping Evaluation Framework
Testing
• Interactions are abstract protocols, and agents have generated ontologies:
– this allows us to simulate different types of relations between the messages
• Community preferences over elements (best sellers, etc.) are simulated by probability distributions
• Interactions are run automatically hundreds of times
• Results are compared with a uniform distribution over the entities (simulating no knowledge about context):
– equivalent set size for the same success rate
– equivalent success rate for the same size of suggestion set
Provisional Results
• After 100 interactions, the predictor is able to provide a set smaller than 7% of the ontology size containing, 70% of the time, the term actually used in message m2
• If all terms are equiprobable, the probability is directly proportional to the size of the (randomly picked) set, as shown above.
Consistency Peer - Environment
A ∈ K(P) ∧ B ∈ K(EP) → σ(A ∧ B)
Specific Question
• Suppose we have a complex environment with adversarial agents
• For specific goals, how complex do interaction models need to be in order to raise group performance significantly?
Environment Simulation Framework
[Diagram: an environment simulator runs simulated agents, with a coordinating peer executing the interaction model; group convergence (random vs coordinated) yields comparative performance]
a(hunter, Id) ::
  ( sawHimAt(Location) => a(hunter, RID)
      ← visiblePlayer(Location) and strafeAttempt(Location)
  or strafeAttempt(Location) ← sawHimAt(Location) <= a(hunter, RID)
  or movementAttempt(random_play) )
You can be a hunter if you send a message revealing the location of a visible opponent player upon whom you are making a strafing attack, or make a strafing attack on a location if you have been told a player is there, or otherwise just do what seems right.
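A toy rendering of the question above: hunters chasing a moving target on a ring, either wandering at random or converging on a broadcast sighting (the role of the sawHimAt message), so that group performance with and without the interaction model can be compared. All parameters (ring size, step budget, number of hunters) are invented for illustration.

```python
import random

def chase(coordinated, size=30, hunters=3, steps=200, seed=0):
    """Return the time step at which some hunter catches the prey,
    or `steps` if it is never caught within the budget."""
    rng = random.Random(seed)
    prey = rng.randrange(size)
    pos = [rng.randrange(size) for _ in range(hunters)]
    for t in range(steps):
        prey = (prey + rng.choice([-1, 1])) % size   # prey wanders
        for i in range(hunters):
            if coordinated:
                # move toward the broadcast sighting (sawHimAt)
                delta = (prey - pos[i]) % size
                if delta != 0:
                    step = 1 if delta <= size // 2 else -1
                    pos[i] = (pos[i] + step) % size
            else:
                # uncoordinated: movementAttempt(random_play)
                pos[i] = (pos[i] + rng.choice([-1, 1])) % size
            if pos[i] == prey:
                return t
    return steps

# Average capture time over several runs, with and without coordination
rand = sum(chase(False, seed=s) for s in range(20)) / 20
coord = sum(chase(True, seed=s) for s in range(20)) / 20
print(f"random: {rand:.1f} steps, coordinated: {coord:.1f} steps")
```

Even this crude model shows the shape of the experiment: the interesting question is how much protocol complexity (sighting messages, role allocation, strafing coordination) is needed before the coordinated group significantly outperforms the random one.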