T0. Multiagent Systems and Electronic Institutions


Description

14th European Agent Systems Summer School

Transcript of T0. Multiagent Systems and Electronic Institutions

Page 1: T0. Multiagent Systems and Electronic Institutions


Multiagent Systems and Electronic Institutions

Carles Sierra, IIIA - CSIC


Contents

1) Introduction

2) Communication

3) Architecture

4) Organisation (Electronic Institutions)

Page 2: T0. Multiagent Systems and Electronic Institutions


1) Introduction


Vision

• Five ongoing trends in computing:

– ubiquity;

– interconnection;

– intelligence;

– delegation; and

– human-orientation.

• Programming has progressed through:

– sub-routines;

– procedures & functions;

– abstract data types;

– objects;

to AGENTS

Page 3: T0. Multiagent Systems and Electronic Institutions

Spacecraft control

When a space probe makes its long flight from Earth to the outer planets, a ground crew is usually required to continually track its progress and decide how to deal with unexpected eventualities. This is costly and, if decisions are required quickly, it is simply impracticable. For these reasons, organisations like NASA are seriously investigating the possibility of making probes more autonomous, giving them richer decision-making capabilities and responsibilities.

This is not science fiction: NASA’s DS1 was doing this!


Internet agents

Searching the internet for the answer to a specific query can be a long and tedious process. So, why not let a computer program, an agent, do the searching for us? The agent would typically be given a query that requires synthesizing pieces of information from various different internet information sources. Failure occurs when a particular resource is unavailable (perhaps due to network failure), or when results cannot be obtained.

Page 4: T0. Multiagent Systems and Electronic Institutions

What is an agent?

A computer system capable of autonomous action in some environment.

Trivial agents: thermostat, Unix daemon (e.g. biff)

An agent must be capable of flexible autonomous action:

– Reactive: continuous interaction with the environment, reacting to changes occurring in it.

– Pro-active: takes the initiative by generating and pursuing goals.

– Social: interacts with other agents via some agent communication language.

[Figure: an agent coupled to its environment, receiving input (percepts) and producing output (actions).]
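This sense-decide-act coupling can be made concrete with a few lines of Python; the loop below is a toy sketch of our own (Room, Thermostat and run are illustrative names, not from the slides):

    # Toy sense-decide-act loop making the input/output coupling concrete.
    class Room:
        def __init__(self): self.temp = 18.0
        def sense(self): return self.temp                 # agent's input
        def apply(self, action):                          # agent's output
            self.temp += 0.5 if action == "heater_on" else -0.1

    class Thermostat:
        def act(self, percept):
            # decide on an action from the current percept
            return "heater_on" if percept < 20.0 else "heater_off"

    def run(agent, env, steps=100):
        for _ in range(steps):
            env.apply(agent.act(env.sense()))

    run(Thermostat(), Room())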


Reactivity

• If the environment of a program is guaranteed to be fixed, the program does not need to worry about its success or failure; it executes blindly (e.g. a compiler).

• The real world is not like this: things change and information is incomplete. Most interesting environments are dynamic.

• Software is hard to build for dynamic domains: a program must take into account the possibility of failure and ask itself whether an action is worth executing!

• A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).

Page 5: T0. Multiagent Systems and Electronic Institutions

Proactiveness

• Reacting to an environment is easy (e.g. stimulus -> response rules).

• We generally want agents to do things for us -> goal-directed behaviour.

• Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative.

• Recognising opportunities.


Social ability

• The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account.

• Some goals can only be achieved with the cooperation of others.

• Similarly for many computer environments: witness the internet.

• Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others.

Page 6: T0. Multiagent Systems and Electronic Institutions

Other properties

• Sometimes other properties are required:

– Mobility: the ability of an agent to move around an electronic network;

– Veracity: an agent will not knowingly communicate false information;

– Benevolence: agents do not have conflicting goals, so every agent will always try to do what is asked of it.

– Rationality: an agent will act in order to achieve its goals and will not act so as to prevent its goals from being achieved, at least insofar as its beliefs permit.

– Learning/adaptation: agents improve performance over time.


Agents as intentional systems

• When explaining human activity, it is often useful to describe it as, for instance:

Carme took her umbrella because she believed it was going to rain

Carles worked hard because he wanted to finish the paper on time

• Human behaviour is predicted and explained through the attribution of attitudes, such as believing, wanting, hoping, fearing, … These attitudes are called intentional notions.

• Daniel Dennett coined the term intentional system to describe entities ‘whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen’. Dennett identifies different grades of intentional system:

‘A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires … A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) - both those of others and its own’.

• Is it legitimate or useful to attribute beliefs, desires and so on, to a computer system?

Page 7: T0. Multiagent Systems and Electronic Institutions

Intentional stance

• Almost every object can be described by the intentional stance:

‘It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires’ (Yoav Shoham)

• Most adults would consider this description absurd. Why?

• The answer seems to be that while the description is consistent

… it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler mechanism description of its behaviour. (Yoav Shoham)

• Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour.

• But with very complex systems, a mechanistic explanation of their behaviour may not be practicable.

As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction.


Intentional systems

• Intentional notions are (further) abstraction tools, as procedural abstraction, abstract data types and objects have been in the past. So why not use the intentional stance as an abstraction tool in computing, to explain, understand and, crucially, program computer systems?

• Helps in characterising agents

– Provides us with a familiar non-technical way of understanding and explaining agents.

• Helps in nested representations

– Provides us with a means to include representations of other systems, which is considered essential for agents to cooperate.

• Leads to a new paradigm: post-declarative systems

– Procedural programming: How to do something.

– Declarative programming: What to do. But how as well.

– Agent programming: What to do. No buts. A theory of agency determines the control.

Page 8: T0. Multiagent Systems and Electronic Institutions

Example 1: MASFIT

[Figure: MASFIT. Real-world auction houses (real auction software, human buyers) and a virtual world of buyer agents are connected through the FM electronic institution.]

Page 9: T0. Multiagent Systems and Electronic Institutions

Example 2: Entertainment

• 3D animation system for generating crowd-related visual effects for film and television.

• Used in the epic battle sequences of The Lord of the Rings film trilogy. AGENTS OF DESTRUCTION: thousands of digitally created fighters clash with humanity in the Battle of Helm’s Deep.


Massive. Build the agent

Massive agents begin life rendered as three-dimensional characters, not the stick figures of older software. Each body part has a size and a weight that obeys the laws of physics.

Page 10: T0. Multiagent Systems and Electronic Institutions

Massive. Insert Motion

Movement data gleaned from human actors performing in a motion-capture studio is loaded into the Massive program and associated with a skeleton. The programmer can fine-tune the agents' movements using the controls at the bottom of the screen.


Massive. Create the brain

The core of every agent is its brain, a network of thousands of interconnected logic nodes. One group of nodes might represent balance, another the ability to tell friend from foe. The agent's brain relies on fuzzy logic.

Page 11: T0. Multiagent Systems and Electronic Institutions

Massive. Replicate

When a variety of agents have been developed, copies are created from each blueprint, then the copies are tweaked to give each fighter a unique mix of various characteristics—height, aggressiveness, even dirtiness. The 50,000 characters in the scene will act as individuals. They are placed into a battlefield grid for testing.


Massive attack

The simulations begin. Since agents are programmed with tendencies, not specific tasks to accomplish, there is no guarantee they will behave as the director needs them to. Only through a trial-and-error process, in which the characters' programs are constantly adjusted, is the battle scene finalized.

Page 12: T0. Multiagent Systems and Electronic Institutions


2) Communication


Speech acts

• Most treatments of communication in MAS take their inspiration from speech act theory.

• Speech act theories are pragmatic theories of language, i.e. theories of language use: they attempt to account for how language is used by people every day to achieve their goals and intentions.

• Origin: Austin’s 1962 book, How to Do Things with Words.

Page 13: T0. Multiagent Systems and Electronic Institutions

Speech acts

• Austin noticed that some utterances are rather like ‘physical actions’ that appear to change the state of the world.

• Paradigm examples are:

– Declaring war

– Christening

– ‘I now pronounce you man and wife’

• But more generally, everything we utter is uttered with the intention of satisfying some goal or intention.

• A speech act theory is a theory of how utterances are used to achieve intentions.


Searle

• Identified different types of speech act:

– Representatives: such as informing, e.g., ‘it is raining’

– Directives: attempts to get the hearer to do something, e.g., ‘please make the tea’

– Commissives: which commit the speaker to doing something, e.g., ‘I promise to …’

– Expressives: a speaker expresses a mental state, e.g., ‘thank you!’

– Declarations: such as declaring war or christening.

Page 14: T0. Multiagent Systems and Electronic Institutions

Components

• In general, a speech act can be seen as having two components:

– A performative verb (e.g. request, inform, …)

– Propositional content (e.g. ‘the door is closed’)

• Examples:

– Performative = request
  Content = ‘the door is closed’
  Act = ‘please close the door’

– Performative = inform
  Content = ‘the door is closed’
  Act = ‘the door is closed!’

– Performative = inquire
  Content = ‘the door is closed’
  Act = ‘is the door closed?’
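A minimal sketch of this performative/content decomposition in Python (the SpeechAct type is ours, purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class SpeechAct:
        performative: str  # e.g. "request", "inform", "inquire"
        content: str       # propositional content

    acts = [
        SpeechAct("request", "the door is closed"),  # 'please close the door'
        SpeechAct("inform", "the door is closed"),   # 'the door is closed!'
        SpeechAct("inquire", "the door is closed"),  # 'is the door closed?'
    ]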


Plan-based semantics

• Cohen and Perrault (1979) used the precondition-delete-add formalism from planning to give speech acts a semantics.

• Request(s, h, φ)

  Preconditions:
  – (s believe (h can do φ)) AND (s believe (h believe (h can do φ)))
  – (s believe (s want φ))

  Postcondition:
  – (h believe (s believe (s want φ)))
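One way to make the pre- and postconditions concrete is to store beliefs as nested tuples in a set; this is our own toy encoding, not Cohen and Perrault's notation:

    # Toy encoding: beliefs as a set of nested tuples.
    def can_request(beliefs, s, h, act):
        # speaker believes hearer can do it, believes hearer believes so,
        # and believes that it wants the act done
        return ((s, "believe", (h, "cando", act)) in beliefs and
                (s, "believe", (h, "believe", (h, "cando", act))) in beliefs and
                (s, "believe", (s, "want", act)) in beliefs)

    def apply_request(beliefs, s, h, act):
        # postcondition: the hearer now believes the speaker wants the act
        beliefs.add((h, "believe", (s, "believe", (s, "want", act))))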

Page 15: T0. Multiagent Systems and Electronic Institutions

KQML + KIF

• KQML = outer language defining performatives:

– Ask-if (‘is it true that …’)
– Perform (‘please perform the following action …’)
– Tell (‘it is true that …’)
– Reply (‘the answer is …’)

• KIF = inner language for content.

• Examples:

(ask-if (> (size chip1) (size chip2)))
(reply true)
(inform (= (size chip1) 20))
(inform (= (size chip2) 18))


FIPA ACL

• 20 performatives

• Similar structure to KQML:

(inform
  :sender agent1
  :receiver agent5
  :content (price good200 150)
  :language s1
  :ontology trade)
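In a program, such a message is naturally represented as a keyword/value structure; a sketch (the dict encoding is ours, the field names follow the FIPA parameters above):

    msg = {
        "performative": "inform",
        "sender": "agent1",
        "receiver": "agent5",
        "content": "(price good200 150)",
        "language": "s1",
        "ontology": "trade",
    }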

Page 16: T0. Multiagent Systems and Electronic Institutions

Inform and request

• ‘inform’ and ‘request’ are the two basic performatives; the rest are macros.

• Meaning:

– Precondition: what must be true for the speech act to succeed.
– Rational effect: what the sender hopes to bring about.

• Inform: content is a statement, precondition is that sender holds that the content is true, intends the recipient to believe it and does not believe that the recipient is aware of whether the content is true or not.

• Request: content is an action, precondition is that sender intends the action to be performed, believes recipient is capable of doing it and does not believe that receiver already intends to perform it.


Social commitments semantics

• A criticism of the mentalist approach.

• Semantics of illocutions are given by the social commitments/expectations they create.

• Illocutions make the state of commitments evolve.

Page 17: T0. Multiagent Systems and Electronic Institutions


3) Architecture


Agent Architectures

• Three types:

– 1956-1985: symbolic agents
– 1985-present: reactive agents
– 1990-present: hybrid agents

• Agent architectures are:

‘A particular methodology for building agents. It specifies how … the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions … and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.’ (Pattie Maes)

‘A particular collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.’ (Kaelbling)

Page 18: T0. Multiagent Systems and Electronic Institutions

Symbolic agents

• They contain an explicitly represented symbolic model of the world.

• They make decisions (for example, about what actions to perform) via symbolic reasoning.

• They face two key problems:

– The transduction problem: translating the real world into an accurate representation in time for it to be useful.

– The representation/reasoning problem: how to represent information about real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful.

• These problems are far from being solved.

• Underlying problem: symbol-manipulation algorithms are usually highly intractable.


Deductive reasoning agents

• Idea: use logic to encode a theory stating the best action to perform in any situation:

  ρ is the theory (usually a set of rules)
  Δ is a logical database that describes the current state of the world
  Ac is the set of actions the agent can perform
  Δ ⊢ρ φ means that φ can be proved from Δ using ρ

% try to find an action explicitly prescribed
for each a in Ac do
    if Δ ⊢ρ Do(a) then
        return a
    end-if
end-for
% try to find an action not excluded
for each a in Ac do
    if Δ ⊬ρ ¬Do(a) then
        return a
    end-if
end-for
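The loop translates almost directly into Python; in this sketch prove() is a naive stand-in for Δ ⊢ρ φ (a real agent would run an inference engine):

    def prove(delta, rho, formula):
        # naive stand-in: just check membership in the database
        return formula in delta

    def deliberate(delta, rho, actions):
        for a in actions:                        # explicitly prescribed?
            if prove(delta, rho, ("Do", a)):
                return a
        for a in actions:                        # at least not forbidden?
            if not prove(delta, rho, ("not", ("Do", a))):
                return a
        return None                              # no action found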

Page 19: T0. Multiagent Systems and Electronic Institutions

Example

• The vacuum world. The goal is for the robot to clear up all the dirt.


Agent architecture

• 3 predicates:

  In(x, y): the agent is at (x, y)
  Dirt(x, y): there is dirt at (x, y)
  Facing(d): the agent is facing direction d

• 3 actions: Ac = {turn, forward, suck} (turn means turn right)

• Rules ρ for determining what to do:

  In(0,0) and Facing(north) and ~Dirt(0,0) then Do(forward)
  In(0,1) and Facing(north) and ~Dirt(0,1) then Do(forward)
  In(0,2) and Facing(north) and ~Dirt(0,2) then Do(turn)
  In(0,2) and Facing(east) then Do(forward)
  and so on!

• With these rules (and other obvious ones) the robot will clean everything up.
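The rules can be encoded as (condition, action) pairs over a simple state record; this encoding is ours, not from the slides:

    rules = [
        (lambda s: s["in"] in s["dirt"], "suck"),  # clear dirt first
        (lambda s: s["in"] == (0, 0) and s["facing"] == "north", "forward"),
        (lambda s: s["in"] == (0, 1) and s["facing"] == "north", "forward"),
        (lambda s: s["in"] == (0, 2) and s["facing"] == "north", "turn"),
        (lambda s: s["in"] == (0, 2) and s["facing"] == "east", "forward"),
    ]

    def next_action(state):
        # first matching rule wins; the 'suck' rule is checked first, so
        # the ~Dirt(x, y) tests of the movement rules are implicit
        for condition, action in rules:
            if condition(state):
                return action
        return "turn"  # 'and so on': remaining rules omitted here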

Page 20: T0. Multiagent Systems and Electronic Institutions

Problems

• How to convert a video stream into Dirt(0,1)?

• Decision making assumes a static environment

• Decision making using first order logic is undecidable

• Even when using propositional logic, decision making is NP-complete in the worst case.

• Typical solutions:

– Weaken the logic
– Use symbolic, non-logical representations


Agent0

• Shoham’s notion of agent oriented programming (AOP)

• AOP is a ‘new programming paradigm, based on a societal view of computation’.

• The key idea of AOP is that of directly programming agents in terms of intentional notions like belief, commitment and intention

• Motivation: in the same way that we use the intentional stance to describe humans, it might be useful to use the intentional stance to program machines.

Page 21: T0. Multiagent Systems and Electronic Institutions

AOP

• 3 components for AOP:

– A logic for specifying agents and describing their mental states

– An interpreted programming language for programming agents

– An ‘agentification’ process for converting ‘neutral applications’ (e.g. databases) into agents

• Results were reported on just the first two components.

• The relation between the first two = semantics.


Agent0

• Extension to LISP

• Each agent has:

– A set of capabilities
– A set of initial beliefs
– A set of initial commitments, and
– A set of commitment rules

• The key element that determines how the agent acts is the commitment rule set.

Page 22: T0. Multiagent Systems and Electronic Institutions

Commitment rules

• Each commitment rule has:

– A message condition
– A mental condition, and
– An action

• On each ‘agent cycle’:

– The message condition is matched against the received messages
– The mental condition is matched against the beliefs
– If the rule fires, the agent becomes committed to the action (it is added to the agent’s commitment set)
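One Agent0-style cycle can be sketched as follows; the rule triples and message dicts are our own simplification of Shoham's LISP syntax:

    def agent0_cycle(beliefs, commitments, rules, inbox):
        # 1) update beliefs from incoming 'inform' messages
        beliefs.update(m["content"] for m in inbox if m["type"] == "inform")
        # 2) match each (msg_cond, mental_cond, action) rule
        for msg_cond, mental_cond, action in rules:
            if any(msg_cond(m) for m in inbox) and mental_cond(beliefs):
                commitments.append(action)
        # 3) hand back the commitments due for execution this cycle
        due, commitments[:] = list(commitments), []
        return due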


Commitment rules

• Actions may be:

– Private: an internally executed computation, or
– Communicative: sending messages

• Messages are constrained to be one of:

– ‘requests’ to commit to action
– ‘unrequests’ to refrain from actions
– ‘informs’ which pass on information

Page 23: T0. Multiagent Systems and Electronic Institutions

Agent0 Control

[Figure: Agent0 control loop. Initialize; then repeatedly: read incoming messages, update beliefs, update commitments, and execute commitments. The loop draws on the agent's beliefs, commitments and abilities, and produces internal actions and outgoing messages.]


A commitment rule

Commit(
  (agent, REQUEST, DO(time, action)),  ;;; message condition
  (B [now, friend agent] AND
   CAN(self, action) AND
   NOT [time, CMT(self, anyaction)]),  ;;; mental condition
  self,
  DO(time, action)
)

Page 24: T0. Multiagent Systems and Electronic Institutions

Reactive architectures

• Symbolic AI has many unsolved problems, essentially related to complexity.

• Many researchers have questioned the viability of the whole approach (Chapman).

• Although they share a common belief that mainstream AI is in some sense wrong, they take very different approaches.


Brooks: behaviour languages

• Three theses:

– Intelligent behaviour can be generated without explicit representations of the kind that symbolic AI proposes.

– Intelligent behaviour can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.

– Intelligence is an emergent property of certain complex systems.

• Two ideas inspiring him:

– Situatedness and embodiment: ‘real’ intelligence is situated in the world, not in disembodied systems like theorem provers or KBS.

– Intelligence and emergence: ‘intelligent’ behaviour arises as a result of an agent’s interaction with its environment.

Page 25: T0. Multiagent Systems and Electronic Institutions

Subsumption architecture

• Hierarchy of task-accomplishing behaviours

• Each behaviour is a simple rule-like structure

• Behaviours compete with others to control the agent

• Lower layers represent more primitive kinds of behaviour (such as avoiding obstacles) and have preference over higher levels

• The resulting system is extremely simple in terms of the amount of computation it performs.

• Some of the robots perform tasks that are impressive from the perspective of symbolic AI.


Mars explorer system

• Steels’ system uses the subsumption architecture and obtains nearly optimal behaviour.

– The objective is to explore a distant planet and, in particular, to collect samples of a precious rock. The location of the samples is not known in advance, but it is known that they tend to cluster.

• If detect an obstacle, then change direction.
• If carrying samples and at the base, then drop samples.
• If carrying samples and not at the base, then travel up gradient.
• If detect a sample, then pick sample up.
• If true, then move randomly.
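Because earlier behaviours take precedence, the controller is just an ordered cascade of condition checks; a sketch in Python (the percept field names are our own):

    def mars_explorer_action(p):
        # first behaviour whose condition fires controls the robot
        if p["obstacle_detected"]:
            return "change_direction"
        if p["carrying_samples"] and p["at_base"]:
            return "drop_samples"
        if p["carrying_samples"]:
            return "travel_up_gradient"
        if p["sample_detected"]:
            return "pick_up_sample"
        return "move_randomly"  # the 'if true' default behaviour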

Page 26: T0. Multiagent Systems and Electronic Institutions

Hybrid architectures

• The compromise position: two subsystems.

– A deliberative one, containing a symbolic world model.

– A reactive one, capable of reacting without complex reasoning.

• Often the reactive subsystem has priority.

• Architectures tend to be layered with higher layers dealing with information at increasing levels of abstraction


Layers

[Figure: layered agent architectures. In horizontal layering, every layer (1 … i … n) receives perceptual input and produces action output. In vertical layering the layers are stacked: with one-pass control, perceptual input enters at the bottom layer and action output leaves at the top; with two-pass control, information flows up through the layers and control flows back down.]

Page 27: T0. Multiagent Systems and Electronic Institutions


4) Organisation (Electronic Institutions)


There are situations where individuals interact in ways that involve commitment, delegation, repetition, liability and risk.

These situations involve participants that are autonomous, heterogeneous, independent, non-benevolent, unreliable, and liable.

EI: Motivation

Page 28: T0. Multiagent Systems and Electronic Institutions


These situations are not uncommon: markets, medical services, armies and many more.

It is usual to resort to trusted third parties whose aim is to make those interactions effective by establishing and enforcing conventions that standardize interactions, allocate risks, establish safeguards and guarantee that certain intended actions actually take place and unwanted situations are prevented.

These functions have been the basis for the development of many traditional institutions.

They are even more necessary when interaction may involve software agents.

EI: Motivation

! Open multi-agent systems characteristics:

• populated by heterogeneous agents, developed by different people, using different languages and architectures

• self-interested agents

• participants change over time and are unknown in advance

Introduction

Page 29: T0. Multiagent Systems and Electronic Institutions


! With the expansion of the Internet, open multi-agent systems represent the most important application area of multi-agent systems.

! Research issue: the need for appropriate methodologies and software tools that support the analysis, design and development of open systems.

! Goal: the design and development of open multi-agent systems.

Introduction

! Human institutions have been working successfully for a long time.

! Institutions are created to achieve certain goals following set procedures.

! The institution is responsible for defining the rules of the game, enforcing them, and imposing penalties in case of violation.

! Examples: auction houses, parliaments, stock exchange markets, …

Introduction

Page 30: T0. Multiagent Systems and Electronic Institutions


[Figure: an electronic institution. Its norms mediate the interactions of the participating agents (AGENT1, AGENT2, AGENT3) with one another and with the environment.]

Institutions, in the sense proposed by North, are a “… set of artificial constraints that articulate agent interactions”.

Approach

! The development of electronic institutions can be divided into two basic steps:

• Formal specification of institutional rules.

• Execution via an infrastructure that mediates agents’ interactions while enforcing the institutional rules.

! The formal specification focuses on the macro-level (rules) aspects of agents, not on their micro-level (players) aspects.

! The infrastructure is required to be general purpose (able to interpret any formal specification).

Approach

Page 31: T0. Multiagent Systems and Electronic Institutions


" Network of protocols " Multi-agent protocols" Norms" Agent Roles"Common Ontology and language

Electronic Institution Specification with ISLANDER


[Figure: electronic institution components: a performative structure (network of protocols) composed of scenes (multi-agent protocols, e.g. a Buyers’ Payment scene), agent roles, and norms.]

Electronic Institution Components

Page 32: T0. Multiagent Systems and Electronic Institutions


! A simple institution where agents interact simulating a chat.

! Each agent has a main topic and a list of subtopics it is interested in.

! Agents create a chat room per main topic.

! They can join the scenes created by other agents.

! The institution keeps track of active chat rooms to provide information to agents.

The (“Hello World”) Chat Example

! Common ontology

! Valid communication language expressions

• List of illocutionary particles

• Content language

! Roles that agents can play

• Internal Roles

• External Roles

! Role relationships

Dialogic Framework Components

Page 33: T0. Multiagent Systems and Electronic Institutions


! Each role defines a pattern of behaviour within the institution (actions associated with roles).

! Agents can play multiple roles at the same time

! Agents can change their roles.

! Two types of roles:

• Internal: played by the staff agents to which the institution delegates its services and tasks.

• External: played by external agents.

! Role relationships:

• Static incompatibility (ssd)

• Dynamic incompatibility (dsd)

• Hierarchy (sub)

Roles

Chat Roles

Page 34: T0. Multiagent Systems and Electronic Institutions


Chat Ontology

Chat Dialogic Framework

Page 35: T0. Multiagent Systems and Electronic Institutions


! CL expressions are formulae of the form (ι (αi ri) β φ τ) where:

• ι is an illocutionary particle (e.g. request, inform);
• αi can be either an agent variable or an agent identifier;
• ri can be either a role variable or a role identifier;
• β represents the addressee(s) of the message and can be:
  - (αk rk): the message is addressed to a single agent;
  - rk: the message is addressed to all the agents playing role rk;
  - “all”: the message is addressed to all the agents in the scene;
• φ is an expression in the content language;
• τ can be either a time variable or a time-stamp.

! Example: (request (?x guest) (!y staff) (login ?user ?email))
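Inside a program, such a scheme is conveniently held as a nested structure; a brief sketch (the tuple encoding is ours):

    # '?'/'!' prefixes mark the variable occurrences described above
    scheme = ("request", ("?x", "guest"), ("!y", "staff"),
              ("login", "?user", "?email"))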

Communication Language

[Figure: the example (request (?x guest) (!y staff) (login ?user ?email)), annotated with its role and communication-language components.]

Roles and Communication Language

Page 36: T0. Multiagent Systems and Electronic Institutions


! Specification level

• A scene is a pattern of multi-agent interaction.

• The scene protocol is specified by a finite-state directed graph where the nodes represent the different states and the directed arcs are labelled with illocution schemes or timeouts.

! Execution level• Agents may join or leave scenes.

• Each scene keeps the context of its multi-agent interaction.

• A scene can be multiply executed and played by different groups of agents.

Scenes

Guest admission scene

Page 37: T0. Multiagent Systems and Electronic Institutions


Guest admission scene. State information


1. (request (?x guest) (?y staff) login(?user ?email))

2. (inform (!y staff) (!x guest) accept())

3. (failure (!y staff) (!x guest) deny(?code))

4. (request (?x guest) (!y staff) login(?user ?email))

5. (inform (!y staff) (all guest) close())

6. (inform (?y staff) (all guest) close())

Guest admission scene. Illocutions

Page 38: T0. Multiagent Systems and Electronic Institutions


! Illocution schemes: at least the terms referring to agents and time are variables.

! Semantics of variable occurrences:

• ?x: variable x can be bound to a new value.

• !x: variable x must be substituted by its last bound value.

Example:

(request (?x guest) (!y staff) login(?user ?email))

! The context of a conversation is captured in a list of variable bindings.
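A toy matcher for this ?x / !x binding semantics, our own sketch rather than ISLANDER's implementation:

    def match(term, value, ctx):
        if isinstance(term, str) and term.startswith("?"):
            ctx[term[1:]] = value              # bind a new value
            return True
        if isinstance(term, str) and term.startswith("!"):
            return ctx.get(term[1:]) == value  # must equal last binding
        if isinstance(term, tuple) and isinstance(value, tuple):
            return len(term) == len(value) and all(
                match(t, v, ctx) for t, v in zip(term, value))
        return term == value                   # constants must coincide

    ctx = {}
    match(("?x", "guest"), ("John", "guest"), ctx)  # binds x -> John
    match(("!x", "guest"), ("John", "guest"), ctx)  # True: same binding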

Scenes

! Two agents in the scene: John as guest and Mike as staff

! Agent John utters an illocution:

(request (John guest) (Mike staff) login(John [email protected]))

! The illocution matches arc 1 and the scene evolves to W1.

! Substitutions:

[?x/John, ?y/Mike, ?user/John, ?email/[email protected]]

1. (request (?x guest) (?y staff) login(?user ?email))

6. (inform (?y staff) (all guest) close())

Guest admission scene. Illocutions

Page 39: T0. Multiagent Systems and Electronic Institutions


! Former bindings:

[?x/John, ?y/Mike, ?user/John, ?email/[email protected]]

! Only illocutions matching the following schemes will be valid:

(inform (Mike staff) (John guest) accept())

(failure (Mike staff) (John guest) deny(?code))

2. (inform (!y staff) (!x guest) accept())

3. (failure (!y staff) (!x guest) deny(?code))

Guest admission scene. Illocutions

! Constraints capture how past actions in a scene affect its future evolution:

• restricting the valid values for a variable

• restricting the paths that a conversation can follow

! Examples:

• A buyer can only submit a single bid at auction time.

• A buyer must submit a bid greater than the last one.

• An auctioneer cannot declare a winner if two buyers have submitted bids at the highest value.

• An agent cannot repeat an offer during a negotiation process.

Scenes Constraints

Page 40: T0. Multiagent Systems and Electronic Institutions


! Example:

• Illocution scheme: commit((?y buyer) (!x auctioneer) bid(!good_id, ?price)), with ?price ∈ (0, +∞)

• Constraint: (> ?price !starting_price), which narrows the valid interval to ?price ∈ (!starting_price, +∞)
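Read operationally, the constraint simply filters candidate bindings; a one-line Python rendering (names are ours):

    def price_ok(price, starting_price):
        # accept a binding for ?price only if it lies in (starting_price, +inf)
        return price > starting_price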

Scene

! ?x: binding occurrence.

! !x: stands for the last binding of variable x.

! !x(wi → wj): stands for the binding of variable x the last time the conversation evolved from wi to wj.

! !x(wi → wj, i): stands for the bindings of variable x the last i times the conversation evolved from wi to wj.

! !x(wi → wj, *): stands for the bindings of variable x all the times the conversation evolved from wi to wj.

Variable Occurrences

Page 41: T0. Multiagent Systems and Electronic Institutions


1. (inform (?x auctioneer) (all buyer) startauction(?a))
2. (inform (!x auctioneer) (all buyer) startround(?good ?price ?bidding_time))
3. (inform (!x auctioneer) (all buyer) offer(!good !price))
4. (request (?y buyer) (!x auctioneer) bid(!good ?bid_price))
5. [!bidding_time]
6. (inform (!x auctioneer) (all buyer) sold(!good ?sold_price ?buyer_id))
7. (inform (!x auctioneer) (all buyer) withdrawn(!good))
8. (inform (!x auctioneer) (all buyer) close())

Example: Vickrey auction

1. (inform (?x auctioneer) (all buyer) startauction(?a))
2. (inform (!x auctioneer) (all buyer) startround(?good ?price ?bidding_time))
3. (inform (!x auctioneer) (all buyer) offer(!good !price))
4. (request (?y buyer) (!x auctioneer) bid(!good ?bid_price))
5. [!bidding_time]
6. (inform (!x auctioneer) (all buyer) sold(!good ?sold_price ?buyer_id))
7. (inform (!x auctioneer) (all buyer) withdrawn(!good))
8. (inform (!x auctioneer) (all buyer) close())

Constraint:

?y ∉ !y(w3 → w4)    ?bid_price > !price

Constraints

Page 42: T0. Multiagent Systems and Electronic Institutions


1. (inform (?x auctioneer) (all buyer) startauction(?a))
2. (inform (!x auctioneer) (all buyer) startround(?good ?price ?bidding_time))
3. (inform (!x auctioneer) (all buyer) offer(!good !price))
4. (request (?y buyer) (!x auctioneer) bid(!good ?bid_price))
5. [!bidding_time]
6. (inform (!x auctioneer) (all buyer) sold(!good ?sold_price ?buyer_id))
7. (inform (!x auctioneer) (all buyer) withdrawn(!good))
8. (inform (!x auctioneer) (all buyer) close())

Constraint:

|!y(w3 → w4)| = 0

Constraints

! Complex activities can be specified by establishing relationships among scenes that define:

• causal dependency (e.g. a guest agent must go through the admission scene before going to the chat rooms)

• synchronisation points (e.g. synchronise a buyer and a seller before starting a negotiation scene)

• parallelisation mechanisms (e.g. a guest agent can go to multiple chat rooms)

• choice points (e.g. a buyer leaving an admission scene can choose which auction scene to join).

• the role flow policy.

Performative Structure

Page 43: T0. Multiagent Systems and Electronic Institutions


! Performative structures as networks of scenes.

! Transitions to link scenes.

! Arcs connecting scenes and transitions labelled with constraints and roles.

! Agents moving from a transition to a scene may join one, some or all current executions of the target scene(s) or start new executions.

! The specification can express that multiple executions of a scene run simultaneously.

Performative Structure


Chat Performative Structure

Page 44: T0. Multiagent Systems and Electronic Institutions


And transition: synchronisation and parallelisation point


Chat Performative Structure

And transition: synchronisation and parallelisation point

Chat Performative Structure

Page 45: T0. Multiagent Systems and Electronic Institutions


Or transition: choice point

Chat Performative Structure

Or transition: choice point

Chat Performative Structure

Page 46: T0. Multiagent Systems and Electronic Institutions


XOr transition: exclusive choice point

Chat Performative Structure

XOr transition: exclusive choice point

Chat Performative Structure

Page 47: T0. Multiagent Systems and Electronic Institutions


Chat Performative Structure

Chat Performative Structure

Page 48: T0. Multiagent Systems and Electronic Institutions


Arcs connecting transitions to scenes determine whether agents join one, some or all current executions of the target scene(s) or whether new executions are started.

Chat Performative Structure

! Norms define the consequences of agents’ actions within the institution.

! Such consequences are captured as obligations:

• Obl(x, φ, s): agent x is obliged to do φ in scene s.

! Norms are a special type of rule specified by three elements:

• Antecedent: the actions that trigger the activation of the norm, together with boolean expressions over illocution scheme variables.

• Defeasible antecedent: the actions that agents must carry out in order to fulfil the obligations.

• Consequent: the set of obligations.

! Actions are expressed as pairs of scene and illocution scheme.

Norms
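A norm can be sketched as an (antecedent, defeasible antecedent, consequent) triple; the Python below is our own illustrative encoding, not ISLANDER's concrete syntax:

    # antecedent/defeasible are predicates over the set of performed
    # (scene, illocution) pairs; consequent is a set of Obl(x, phi, s) triples
    def pending_obligations(norms, performed):
        obligations = set()
        for antecedent, defeasible, consequent in norms:
            if antecedent(performed) and not defeasible(performed):
                obligations |= consequent  # active and not yet fulfilled
        return obligations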

Page 49: T0. Multiagent Systems and Electronic Institutions


Norms

Electronic Institutions Definition

Page 50: T0. Multiagent Systems and Electronic Institutions


http://e-institutions.iiia.csic.es

EIDE TEAM

Josep Ll. Arcos, David de la Cruz, Marc Esteva, Andrés García, Pablo Noriega, Bruno Rosell, Juan A. Rodríguez-Aguilar, Carles Sierra, Wamberto Vasconcelos

Electronic Institutions Development Environment


! Engineering open multi-agent systems is a highly complex task.

! MAS decouple the agents’ internal AI from the communication infrastructure.

! MAS is a technology that generalises other distributed approaches: grid, P2P, cloud.

! Electronic institutions reduce programming complexity by introducing normative (regulatory) environments.

Conclusions