Transcript of Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Page 1: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Intelligent and Adaptive Systems Group Seminar

28th September 2006

Page 2: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Aim of research

• Reduce the uncertainty of agent interactions caused by:
– poor performance: substandard quality and a tendency for agents to change goals
– competition: unreliability of agents due to information-hiding
– malicious behaviour: deliberate harm caused by agents through lying and collusion

Page 3: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust and other notions

• Existing research in trust has mostly looked at the positive side of trust and at the two faces of trustworthiness: trust and no trust

• Negative trust is a motivational force (Marsh, 2005)

• Untrust: measure of how little an agent is positively trusted

[Marsh, S. and Dibben, M., Trust, untrust, distrust and mistrust – an exploration of the dark(er) side, iTrust 2005, LNCS 3477, 2005]

Page 4: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust and other notions

• Distrust: measure of how much an agent believes another agent will actively work against its interests

• Undistrust (Griffiths, 2005): lies between distrust and untrust, meaning that an agent is not completely distrusted but still has negative trustworthiness

[Griffiths, N., Enhancing peer-to-peer collaboration using trust, International Journal of Expert Systems with Applications (to appear), 2006]
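To make the four notions above concrete, here is a minimal Python sketch that classifies a scalar trust value into trust, untrust, undistrust and distrust. The [-1, 1] scale and the two thresholds are illustrative assumptions, not definitions taken from the Marsh or Griffiths papers.

```python
# Illustrative sketch only: maps a scalar trust value to the notions above.
# The [-1, 1] scale and the two thresholds are assumptions for illustration.

def classify(trust, cooperation_threshold=0.5, distrust_threshold=-0.5):
    """Map a trust value in [-1, 1] to trust / untrust / undistrust / distrust."""
    if trust >= cooperation_threshold:
        return "trust"       # positively trusted enough to cooperate
    if trust >= 0:
        return "untrust"     # positively trusted, but not enough to rely on
    if trust > distrust_threshold:
        return "undistrust"  # negative trustworthiness, not fully distrusted
    return "distrust"        # believed to actively work against our interests

if __name__ == "__main__":
    for value in (0.8, 0.3, -0.2, -0.9):
        print(value, "->", classify(value))
```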

Page 5: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust: experience-based

• It is directly relevant to the requesting agent

• Issues:
– low number of previous interactions, or no previous interactions at all
– previous interactions not completely relevant due to differing services and conditions
– dishonest behaviour by the target agent
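A minimal sketch of how experience-based trust might be computed from a history of interaction outcomes, assuming each outcome is scored in [0, 1]. The recency weighting and the neutral default for agents with no history are illustrative choices, not taken from a specific model.

```python
# Minimal sketch of experience-based trust from past interaction outcomes.
# The decay factor and the neutral default are assumptions for illustration.

def experience_trust(outcomes, decay=0.9, default=0.5):
    """Recency-weighted mean of past interaction outcomes with a target agent."""
    if not outcomes:
        return default  # no previous interactions: fall back to a neutral value
    weights = [decay ** age for age in range(len(outcomes) - 1, -1, -1)]
    weighted = sum(w * o for w, o in zip(weights, outcomes))
    return weighted / sum(weights)

if __name__ == "__main__":
    print(experience_trust([0.9, 0.8, 0.2, 0.1]))  # recent poor outcomes pull the value down
    print(experience_trust([]))                    # no history -> neutral default
```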

Page 6: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust: recommendation-based

• The opinions of third party agents can be requested, directly or indirectly

• Recommendations resolve some of the issues linked with direct interactions

• Issues:
– Inaccurate reporting
– Reliability decreases as the length of the recommendation chain increases
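The chain-length issue can be illustrated with a small sketch that discounts a recommended trust value for every hop it travels; the multiplicative per-hop discount is an assumed illustration.

```python
# Sketch: weight a recommendation down as the recommendation chain grows.
# The per-hop discount factor is an assumption for illustration.

def discounted_recommendation(recommended_trust, chain_length, per_hop_discount=0.8):
    """Discount a recommended trust value by the number of hops it passed through."""
    if chain_length < 1:
        raise ValueError("a recommendation involves at least one hop")
    return recommended_trust * (per_hop_discount ** chain_length)

if __name__ == "__main__":
    print(discounted_recommendation(0.9, chain_length=1))  # direct recommendation
    print(discounted_recommendation(0.9, chain_length=3))  # longer chain, less weight
```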

Page 7: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust: multi-dimensional

• Multiple dimensions represent the different aspects of trust more accurately

• Examples are the dimensions of success, cost, timeliness, quality (Griffiths, 2005)

• Other dimensions are competence, disposition, dependence, fulfilment (Castelfranchi, 1998)

[Griffiths, N., Task delegation using experience-based multi-dimensional trust, AAMAS 2005, 2005]

[Castelfranchi, C. and Falcone, R., Principles of trust in MAS: Cognitive anatomy, social importance, and quantification, ICMAS 1998, 1998]
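A sketch of how the Griffiths (2005) dimensions named above (success, cost, timeliness, quality) might be held separately and recombined with task-specific weights; the dataclass layout and the weighting scheme are assumptions for illustration.

```python
# Sketch of a multi-dimensional trust record using the dimensions named on the slide.
# Holding each dimension separately lets an agent weight them per task; the
# combination rule below is an assumption for illustration.

from dataclasses import dataclass

@dataclass
class MultiDimensionalTrust:
    success: float
    cost: float
    timeliness: float
    quality: float

    def overall(self, weights=None):
        """Combine the dimensions with task-specific weights (equal by default)."""
        weights = weights or {"success": 1, "cost": 1, "timeliness": 1, "quality": 1}
        total = sum(weights.values())
        return sum(getattr(self, dim) * w for dim, w in weights.items()) / total

if __name__ == "__main__":
    t = MultiDimensionalTrust(success=0.9, cost=0.7, timeliness=0.3, quality=0.8)
    # An urgent order can weight timeliness heavily:
    print(t.overall({"success": 1, "cost": 1, "timeliness": 3, "quality": 1}))
```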

Page 8: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust: confidence

• It is a measure of how accurate a piece of information is thought to be

• Associated with both experience-based and recommendation-based trust

• For instance, the more interactions an agent has with a target agent, the higher the confidence in its trust in that agent
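A minimal sketch of a confidence measure that grows with the number of interactions behind a trust value, as described above; the saturating form n / (n + k) is an assumed illustration, not a formula from the cited work.

```python
# Sketch: confidence in a trust value rises from 0 towards 1 with more interactions.
# The saturating form and the constant k are assumptions for illustration.

def confidence(num_interactions, k=5):
    """Confidence in a trust value based on how many interactions underlie it."""
    return num_interactions / (num_interactions + k)

if __name__ == "__main__":
    print(confidence(1))   # a single interaction gives low confidence
    print(confidence(10))  # ten interactions give noticeably higher confidence
```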

Page 9: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust: reputation

• It is the overall opinion of a set of agents about the trustworthiness of a target agent

• The overall opinion is formed from the combined recommendations requested and is affected by who is requesting the information

• Unlike on eBay, reputation here is not a single value that is accessible to all members of the system
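A sketch of forming a reputation value from gathered recommendations, weighting each opinion by the requester's trust in the recommender; the weighting scheme is an illustrative assumption.

```python
# Sketch: reputation as a weighted combination of the recommendations received,
# where each opinion is weighted by the requester's trust in that recommender.
# The weighting scheme is an assumption for illustration.

def reputation(opinions):
    """opinions: list of (recommended_trust, trust_in_recommender) pairs."""
    total_weight = sum(weight for _, weight in opinions)
    if total_weight == 0:
        return None  # no usable recommendations
    return sum(value * weight for value, weight in opinions) / total_weight

if __name__ == "__main__":
    # Three recommenders report on the same target agent:
    print(reputation([(0.9, 0.8), (0.7, 0.5), (0.2, 0.1)]))
```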

Page 10: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Social relationships

• Relationships are mainly formed when agents interact with other agents

• The identification of relationships among agents can help to reduce the uncertainty of interactions

• Types of relationships:
– Underlying affiliation
– Group of agents likely to cooperate
– Competitors
– Collusive behaviour among a group of agents
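A small sketch of how the relationship types listed above might be represented so they can be attached to pairs of agents; the enum and the pair-keyed map are assumptions for illustration.

```python
# Sketch: a representation of the relationship types listed on the slide.
# The enum and the pair-keyed mapping are assumptions for illustration.

from enum import Enum, auto

class Relationship(Enum):
    AFFILIATION = auto()   # underlying affiliation, e.g. same owner
    COOPERATION = auto()   # group of agents likely to cooperate
    COMPETITION = auto()   # competitors
    COLLUSION = auto()     # collusive behaviour among a group of agents

# Known relationships, keyed by the (unordered) pair of agents involved:
relationships = {
    frozenset({"r1", "t1"}): Relationship.AFFILIATION,
    frozenset({"r2", "t1"}): Relationship.COMPETITION,
}

if __name__ == "__main__":
    print(relationships[frozenset({"r1", "t1"})])
```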

Page 11: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Social relationships: illustration

Page 12: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenges

• Our trust model would have all the necessary features to enable agents to:
– assess the trustworthiness of potential interaction partners from past experience
– combine past experience with third-party recommendations to obtain the reputation of other agents
– identify and use social relationships among agents to better understand agent motivations and behaviour while providing services and information
– accurately update the trustworthiness information of agents to reflect changes
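A sketch of what an interface covering the four capabilities above might look like; the class and method names are placeholders invented for illustration, not an existing implementation.

```python
# Sketch only: a placeholder interface with one method per capability listed
# on the slide. Names and signatures are invented for illustration.

from abc import ABC, abstractmethod

class TrustModel(ABC):
    @abstractmethod
    def experience_trust(self, target):
        """Assess the trustworthiness of a potential partner from past experience."""

    @abstractmethod
    def reputation(self, target, recommenders):
        """Combine past experience with third-party recommendations."""

    @abstractmethod
    def relationships(self, target):
        """Identify social relationships that explain the target's behaviour."""

    @abstractmethod
    def record_outcome(self, target, outcome):
        """Update trustworthiness information after an interaction, to reflect changes."""
```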

Page 13: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenge illustration I

• Improve the decision-making of requesting agents faced with agents that perform poorly

• Scenario:
– Agent x needs to buy car parts, and these are supplied by agents t1 and t2
– Agent x has had previous interactions with both

• Usage:
– Agent x uses its experience-based trust in agent t1 to assess whether to rely on t1
– Agent x untrusts t1 and thus needs to assess t2's trustworthiness
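A minimal sketch of the decision in this illustration, assuming scalar experience-based trust values and a cooperation threshold as in the earlier sketches; all figures are invented.

```python
# Sketch of illustration I: x falls back to t2 because t1 is untrusted.
# The trust values and the threshold are invented for illustration.

COOPERATION_THRESHOLD = 0.5

experience = {"t1": 0.3, "t2": 0.7}  # x's experience-based trust in each supplier

def choose_supplier(candidates):
    """Prefer the first candidate whose experience-based trust clears the threshold."""
    for agent in candidates:
        if experience.get(agent, 0.0) >= COOPERATION_THRESHOLD:
            return agent
    return None  # every candidate is untrusted

if __name__ == "__main__":
    print(choose_supplier(["t1", "t2"]))  # t1 is untrusted, so x relies on t2
```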

Page 14: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenge illustration II

• Improve the identification by requesting agents of lower-performing agents which are highly recommended

• Scenario:
– Agent x needs to buy car parts, supplied by agent t1
– Agent x has had no previous interactions with t1

• Usage:
– Agent x uses its recommendation-based trust in agent r1 to ask for its opinion about t1; r1 returns a high trust in t1
– Publicly available information reveals that r1 and t1 are owned by the same company
– Agent x can use this information to adjust its trust in t1
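A sketch of the adjustment in this illustration: a recommendation is discounted when the recommender and the target are known to share an owner; the ownership data and the discount factor are invented for illustration.

```python
# Sketch of illustration II: discount a recommendation from an affiliated agent.
# The ownership data and the discount factor are invented for illustration.

shared_owner = {("r1", "t1")}  # publicly available ownership information

def adjusted_recommendation(recommender, target, recommended_trust, discount=0.5):
    """Reduce the weight of a recommendation coming from an affiliated agent."""
    if (recommender, target) in shared_owner or (target, recommender) in shared_owner:
        return recommended_trust * discount
    return recommended_trust

if __name__ == "__main__":
    print(adjusted_recommendation("r1", "t1", 0.9))  # affiliated: discounted
    print(adjusted_recommendation("r2", "t1", 0.9))  # independent: taken at face value
```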

Page 15: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenge illustration III

• Improve the identification by requesting agents of collusive agents which may defame a target agent

• Scenario:
– Agent x needs to buy car parts, supplied by agent t1
– Agent x has had no previous interactions with t1

• Usage:
– Agent x assesses the reputation of t1 by asking agents r1, r2 and r3 to give their opinions about t1
– Agents r1, r2 and r3 collude to all give a high distrust recommendation for t1, which is worse than t1's real worth
– Relationships identifying the recommenders as competitors of t1 would help x to be aware of the collusion possibility
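A sketch of how the relationship information in this illustration could be used: the mean opinion is reported together with the share of recommenders who compete with the target, so x can see that the uniformly low opinions may be collusive; all data are invented for illustration.

```python
# Sketch of illustration III: flag when most recommendations about a target
# come from its competitors. The relationship data and figures are invented.

competitors_of = {"t1": {"r1", "r2", "r3"}}

def assess_opinions(target, opinions):
    """Return the mean opinion together with the share of opinions from competitors."""
    values = list(opinions.values())
    rivals = competitors_of.get(target, set())
    competitor_share = sum(1 for r in opinions if r in rivals) / len(opinions)
    return sum(values) / len(values), competitor_share

if __name__ == "__main__":
    mean, share = assess_opinions("t1", {"r1": 0.1, "r2": 0.1, "r3": 0.2})
    if share > 0.5:
        print(f"mean opinion {mean:.2f}, but {share:.0%} of recommenders compete with t1")
```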

Page 16: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenge illustration IV

• Comparison between potential interaction partners using the trust values from past experience and from recommendations

• Scenario:
– Agent x wants to assess the trustworthiness of agents t1 and t2 for the next urgent order of car parts
– Agent x has interacted once with t1 and has a very high trust in t1 in all dimensions
– Agent x has interacted 10 times with t2 and has a low trust in t2 for its timeliness of delivery

Page 17: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: challenge illustration IV (continued)

• Usage:
– Using confidence to measure the reliability of a trust value, x can take the higher number of interactions with t2 as an indication of its higher reliability
– By assessing the trust dimensions separately, x can see that t2 is not as trustworthy as t1 in terms of delivery, but combining this with the confidence information, t2 can be seen to be the less risky choice
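A worked sketch of the comparison in illustration IV: each candidate's trust value is pulled towards a neutral prior in proportion to how little confidence the requester has in it, so t2's better-evidenced value wins; the figures and the combination rule are invented for illustration.

```python
# Sketch of illustration IV: combine each candidate's trust value with the
# confidence derived from the number of interactions behind it. The figures
# and the confidence-weighted pull towards a neutral prior are invented.

def confidence(num_interactions, k=5):
    return num_interactions / (num_interactions + k)

def reliability_adjusted(trust, num_interactions, prior=0.5):
    """Pull a trust value towards a neutral prior when confidence in it is low."""
    c = confidence(num_interactions)
    return c * trust + (1 - c) * prior

if __name__ == "__main__":
    t1 = reliability_adjusted(0.95, num_interactions=1)   # very high trust, 1 interaction
    t2 = reliability_adjusted(0.70, num_interactions=10)  # held down by low timeliness, 10 interactions
    # t2's adjusted value comes out higher, making it the less risky choice:
    print(f"t1 adjusted: {t1:.2f}, t2 adjusted: {t2:.2f}")
```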

Page 18: Intelligent and Adaptive Systems Group Seminar 28th September 2006.

Trust Model: early design stage

• Trust representation

• Reputation evaluation

• Information gathering and analysis about underlying affiliations among agents