
Thesis Proposal:

Decentralized Decision-Making

for Real Robot Teams Based on

POMDPs

João Vicente Teixeira de Sousa Messias

Instituto de Sistemas e Robótica

Instituto Superior Técnico

Members of the Advisory Committee:

Prof. Pedro Manuel Urbano de Almeida Lima (Supervisor)

Dr. Matthijs Theodor Jan Spaan (co-Supervisor)

Dr. Francisco António Chaves Saraiva de Melo

Prof. Nikos Vlassis

March 2012


Abstract

Although decision-theoretic (DT) frameworks provide powerful, well-established tools for planning under uncertainty, their use in real-world applications, outside of the academic environment, has been relatively limited. This work aims at the application of DT frameworks to real-world scenarios in cooperative robotics. To this end, both theoretical and practical issues involved in modeling realistic multiagent decision problems will be addressed. The family of existing DT approaches will be briefly reviewed, and examples of reported applications to real-world problems will be presented, alongside potential implementation areas. In particular, the framework of Partially Observable Markov Decision Processes (POMDPs) will be shown, through a realistic example, to fulfill most of the requirements of real problems in cooperative robotics. The shortcomings of DT frameworks when dealing with reactive, dynamic multiagent scenarios will be identified. Current work, focusing on efficient communication policies for multiagent POMDPs, will be discussed. Finally, future research directions will be proposed, with the goal of achieving a full-scale, real-world implementation of a multiagent POMDP, and of providing guidelines for the application of these frameworks to other scenarios.


Contents

List of Figures v

List of Tables ix

1 Introduction 1

1.1 Motivation: Planning Under Uncertainty in the Real World . . . . . . . 1

1.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Background 5

2.1 Markov Decision Processes . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.1.1 Finding Agent Policies in MDPs . . . . . . . . . . . . . . . . . . 8

2.2 Extensions for Continuous-Time Problems . . . . . . . . . . . . . . . . . 11

2.2.1 Semi-Markov Decision Processes . . . . . . . . . . . . . . . . . . 11

2.2.2 Continuous-Time Markov Decision Processes . . . . . . . . . . . 15

2.2.3 Generalized Semi-Markov Decision Processes . . . . . . . . . . . 15

2.3 Extensions for Partially Observable Domains . . . . . . . . . . . . . . . 18

2.3.1 Partially Observable Markov Decision Processes . . . . . . . . . . 19

2.3.1.1 Finding agent policies in POMDPs . . . . . . . . . . . . 20

2.3.2 Partially Observable Semi-Markov Decision Processes . . . . . . 22

2.4 Extensions for Multiagent Decision-Making Problems . . . . . . . . . . . 23

2.4.1 Decentralized Partially Observable Markov Decision Processes . . 23

2.4.2 Modeling Communication . . . . . . . . . . . . . . . . . . . . . . 26

2.4.3 Factored Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3 On the Practical Implementation of MDPs and Related Models 31

3.1 A Review of MDP-Based Applications . . . . . . . . . . . . . . . . . . . 32

3.2 POMDPs for Real Teams of Robots: A Small-Scale Example . . . . . . 36

3.2.1 Description of the Case Study . . . . . . . . . . . . . . . . . . . . 37

3.2.2 Identifying an Appropriate DT Framework . . . . . . . . . . . . . 37

3.2.3 Modeling States, Actions, and Observations . . . . . . . . . . . . 39

3.2.4 Obtaining the Stochastic Models . . . . . . . . . . . . . . . . . . 47

3.2.5 Defining the Reward Model . . . . . . . . . . . . . . . . . . . . . 49

3.2.6 Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.2.7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4 Efficient Communication in Partially Observable Domains 59

4.1 Exploiting Sparse Dependencies in Multiagent POMDPs . . . . . . . . . 60

4.2 Decision-making with Factored Beliefs . . . . . . . . . . . . . . . . . . . 61

4.3 An illustrative example: the Relay problem . . . . . . . . . . . . . . . . 63

4.4 Formal model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.4.1 Value Bounds Over Local Belief Space . . . . . . . . . . . . . . . 65

4.4.2 Dealing With Locally Ambiguous Actions . . . . . . . . . . . . . 67

4.4.3 Mapping Local Belief Points to Communication Decisions . . . . 70

4.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

5 Future Work 77

5.1 Event-Driven POMDPs . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

5.1.1 Extension to Multiagent Systems . . . . . . . . . . . . . . . . . . 83

5.2 Adaptive Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . 83

5.3 Experimental Environments . . . . . . . . . . . . . . . . . . . . . . . . . 86

5.3.1 Robotic Soccer as a Framework for Cooperative Robotics . . . . 86

5.3.2 Semi-Autonomous Multiagent Surveillance . . . . . . . . . . . . . 87

5.4 Research Timeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.4.1 Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91


References 93


List of Figures

2.1 An overview of the topological relationships between relevant MDP-based

model classes, according to their descriptive capabilities. . . . . . . . . . 6

2.2 A 2-TBN representing a MDP. . . . . . . . . . . . . . . . . . . . . . . . 7

2.3 A simple scenario in which multiple events can trigger in the same state.

Events, here corresponding to transitions between states, are represented

by arrows and labeled e_{s,s'} . . . . . . . . . . . . . . . . . . . . . . 14

2.4 A DBN representing a GSMDP. Since the system is non-Markovian, it

cannot be described through a 2-TBN. . . . . . . . . . . . . . . . . . . . 16

2.5 An example of the practical difference in the definition of events, between

(a) the original GSMDP formulation of Younes and Simmons (2004), and

(b) Definition 2.2.2, as used in this work. . . . . . . . . . . . . . 17

2.6 A 2-TBN representing a POMDP. . . . . . . . . . . . . . . . . . . . . . . 19

2.7 The linear supports of a value function as a convex partition of B. . . . 21

2.8 A 2-TBN representing a two-agent (d = 2) Dec-POMDP. . . . . . . . . . 24

2.9 A 2-TBN representing an example of a factored two-agent Dec-POMDP. 29

3.1 Two possible state discretizations for a robotic soccer environment: (a)

A grid-like metric discretization, typical for navigation problems. The

actions of a robot may move it to adjacent cells; (b) A discretization

based on logical predicates, useful for high-level decision making. Here,

states take a more logical interpretation, such as “being in either side of

the field” (states 1 and 2), or “being near the goal” (state 3). Actions

may be seen as complex behaviors for a robot. The represented action

attempts to move the robot closer to the goal. However, the Markovian

property of the system may be broken. . . . . . . . . . . . . . . . . . . . 40


3.2 Diagram showing the different possible local states the agent may be

in. Note that they are not independent, since for one agent to be the

Attacker the other must be the Supporter and vice-versa. . . . . . . . . 45

3.3 Synchronization timeline with instantaneous communication. . . . . . . 51

3.4 Synchronization timeline with delayed communication of observations. . 52

3.5 Convergence of the Perseus algorithm for the proposed Dec-POMDP. . . 53

3.6 Expected reward collected from simulating the proposed task with in-

stantaneous communication (top) and one-step delay (bottom). . . . . . 54

3.7 Behavior of the robots after scoring a goal. The positions of the initially

attacking and supporting robots are shown by a solid and a dashed line,

respectively. The position of the ball is marked in gray. The goal is

located at (9,0) and is 2m wide. t1, t2, t3, t4 are sequential time instants. 55

3.8 Behavior of the robots when passing to avoid obstacles. The initial at-

tacker, in this case, is shown by the dashed line. At t2 a pass occurs. . . 56

4.1 The Relay problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2 The linear supports of the optimal stationary joint value function for the

Relay problem, colored according to the respective local actions of agent 1. 64

4.3 The projections of the linear supports of the joint value function onto the

belief factor over the left room. . . . . . . . . . . . . . . . . . . . . . . . 65

4.4 (a) Layout of the Relay-Small problem. (b) Layout of the Relay-Large

problem. (c) Communication map for the Relay-Small problem. . . . . . 72

4.5 Value bounds for the Relay-Small problem. The dashed lines indicate

the minimum value bounds, and the solid lines represent the maximum

value bounds, for each action. . . . . . . . . . . . . . . . . . . . . . . . . 73

4.6 Representation of the OneDoor scenario. . . . . . . . . . . . . . . . . . . 74

5.1 A robotic navigation task in which actions are assumed to have static

termination conditions, and a metric grid discretization is used for the

state space. The amount of time or distance which is assumed for each

action and the resolution of the state space jointly affect the quality of

the resulting policy and the computational complexity of its calculation.

Left: A discretization with |S| = 13, and |π∗| = 8; Right: A discretization

with |S| = 43 and |π∗| = 12. . . . . . . . . . . . . . . . . . . . . . . . . . 78


5.2 Modeling the navigation task (left) as a Discrete-Event System (Finite

State Automaton, right). In the latter case, nodes correspond to the

labeled areas of the environment (states), and arcs represent the possible

transitions between states. . . . . . . . . . . . . . . . . . . . . . . . . . . 79

5.3 The application of a RRT-based algorithm to the previously described

navigation task, as an alternative to the discretization of the configura-

tion space of a robotic agent. . . . . . . . . . . . . . . . . . . . . . . . . 81

5.4 A tentative framework for an Event-Driven POMDP solver. It would

extend a regular POMDP solver by providing it with dynamic transi-

tion probabilities, which would be supplied by underlying, task specific

algorithms. The example considers an RRT planner for a navigation task. 81

5.5 Collecting a history of outcomes of a given task, including initial and

final postures, the resulting event, and elapsed time. . . . . . . . . . . . 84

5.6 An algorithm to adapt the stochastic models of a POSMDP, given data

defined over a continuous environment. In this process, K represents the

optimal number of clusters. . . . . . . . . . . . . . . . . . . . . . . . . . 85

5.7 The SocRob robotic soccer team. . . . . . . . . . . . . . . . . . . . . . . 86

5.8 A multiagent surveillance system. Agents can either be stationary (e.g.

cameras) or mobile with the ability of directly affecting the environment

(e.g. autonomous robots). Observations are shared, and decisions are

taken in a centralized fashion. . . . . . . . . . . . . . . . . . . . . . . . . 87

5.9 An overview of a camera network installed at ISR facilities, which will

serve as a testbed for the methods developed in the scope of the MAIS-S

project. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.10 Gantt chart for the proposed timeline. . . . . . . . . . . . . . . . . . . . 90


List of Tables

4.1 Results of the proposed method for various environments. For settings

assuming full and reduced communication, results show empirical control

quality and online communication usage. . . . . . . . . . . . . . . . 73

4.2 Running time (in seconds) of the proposed method in comparison to the

Perseus point-based POMDP solver. . . . . . . . . . . . . . . . . . . . . 74

5.1 Proposed research timeline. . . . . . . . . . . . . . . . . . . . . . . . . . 90


Chapter 1

Introduction

1.1 Motivation: Planning Under Uncertainty in the Real World

Planning and decision-making are fundamental processes in many human activities. The field of Operations Research (OR) provides a mathematical basis to quantitatively evaluate decisions, allowing the best plans to be identified in terms of their perceived utility or efficiency in accomplishing given goals. Many of the techniques developed in this field are therefore applicable to the domain of Artificial Intelligence (AI), in that they may provide decision-making capabilities to autonomous agents. One of the areas of research which is common to both AI and OR is that of Decision-Theoretic (DT) planning, which focuses on decision-making under uncertainty.

DT techniques address environments in which it is necessary to take into account uncertainty in the actions and/or observations of an agent, as is the case in many real-world scenarios. A Markov Decision Process (MDP) is a widely known and well-studied mathematical framework to model problems where the outcome of an agent's actions is probabilistic, but knowledge of the agent's state is assumed (Bellman, 1957a). For this type of problem, several algorithms exist that provide optimal or approximate solutions to MDPs in reasonable time.

When the agent's knowledge is insufficient to directly determine its state, for example a mobile robot with noisy sensors, then the uncertainty of its observations must also be considered. Such problems are where the Partially Observable Markov Decision Process (POMDP) framework finds its domain of application.

Planning under uncertainty for a single agent using POMDPs has been the subject of active research for the last decade (Cassandra, 1998a; Cheng, 1988). While solving a POMDP optimally in its general case is a difficult problem, various algorithms exist to compute approximate solutions to moderately-sized POMDPs (Kurniawati et al., 2008; Pineau et al., 2003; Spaan and Vlassis, 2005) that may find some application in real-world scenarios.

In certain applications, however, a single agent is not enough to model the full scale of the problem. Such is the case, for example, in cooperative robotics, where multiple agents must work together to achieve a common goal. The Decentralized POMDP (Dec-POMDP) framework is a natural extension of the POMDP paradigm to multiple agents. In this type of model, the agents need to consider not only the uncertainty of their own actions and observations, but also that of their partners, which may (or may not) be aided by the use of communication (Roth et al., 2005a). Naturally, being more general, this type of model is also harder to solve than its POMDP and MDP counterparts. In fact, optimally solving a general Dec-POMDP is provably intractable (Bernstein et al., 2000; Goldman and Zilberstein, 2004). However, approximate solutions can be found (Oliehoek et al., 2008a), even if the computational complexity of the currently existing algorithms restricts their application to small-scale problems. This same problem limits their usability in real-world scenarios, and so far their application has been restricted to theoretical experiments and small simulated examples (Goldman and Zilberstein, 2004).

Since the Dec-POMDP is the most general form of an MDP, the term is used somewhat loosely, and special classes of this model include Multiagent MDPs and POMDPs (MMDPs) and Decentralized MDPs (Dec-MDPs), which are easier to solve while still retaining some of the functionality of Dec-POMDPs (Goldman and Zilberstein, 2004).

This work addresses the problem of modeling real-world multiagent decision-making tasks through techniques grounded on the theory of MDPs. By doing so, it will attempt to bridge the apparent gap between theory and practice in this field of study, and to draw further interest to the application of decision-theoretic frameworks.

The applications which will act as case studies concern networked robot systems, although, as will be shown, other multiagent scenarios may form suitable domains for the application of the techniques discussed here.


1.2 Contributions

In its current state, this work characterizes the difficulties involved in the implementation of decision-theoretic frameworks in real-world cooperative robotics tasks.

It also proposes a novel approach to achieve efficient communication by exploring redundancy in the joint policies of multiagent teams.

The final goal of this work is to contribute systematic procedures for the modeling and solution of decentralized decision-making tasks for real teams of robots.

1.3 Outline

The organization of this document is as follows: Chapter 2 introduces the necessary

theoretical background on Markov Decisions Processes and associated models, which

will act as a basis for the remainder of the work. Chapter 3 reviews relevant past

work in this field, with particular emphasis on real-world applications of decision theo-

retic frameworks. It then presents an example of a cooperative robotics task, modeled

through DT methods, and tested in a realistically simulated environment, which was

developed and published in the scope of this work (Messias et al., 2010). This example

explores the limitations of these frameworks in the domain of cooperative robotics, and

identifies the main issues to be addressed when considering real teams of robots. A

solution to one of these issues, that of efficient communication in partially observable

multiagent problems, is discussed in Chapter 4, further describing earlier work by the

author (Messias et al., 2011). Future research directions are proposed in Chapter 5, as

part of the scope of this work, along with concluding comments on the work which has

been performed so far, and the feasibility of the future proposed approaches.


Chapter 2

Background

The work developed in this thesis is based on the theory of Markov Decision Processes (MDPs) (Bellman, 1957a). This well-studied mathematical framework provides a wide array of tools for the formulation, and solution, of decision-making problems in stochastic environments. In their original form, MDPs can only model single-agent decision processes, where full knowledge of the state of the associated environment is assumed, and where the system is thought to evolve in discrete time. To address these limitations, various important extensions to the MDP framework have been introduced, which, as a whole, allow a broad class of realistic multiagent problems to be modeled. A given task may be categorized as belonging to one of these model classes depending on its particular requirements, and on the different simplifying assumptions which are made about the nature of the problem.

The main objective of this work is to efficiently model, and solve, real-world multiagent problems in stochastic environments, with incomplete or noisy information. To achieve this purpose, various extensions to the MDP framework need to be taken into consideration, each of them possessing desirable properties.

In this chapter, and in order to make the necessary definitions clear, the MDP framework is briefly reviewed (Section 2.1), and the associated extensions which are most relevant to this work are described, according to each of three fundamental properties which will later be exploited: the ability to model time-driven dynamics (Section 2.2); partial observability (Section 2.3); and multiagent decision making (Section 2.4).

Figure 2.1: An overview of the topological relationships between relevant MDP-based model classes, according to their descriptive capabilities.

The notation used in this chapter is mostly consistent with that of basic probability theory (Jazwinski, 1970). The non-metric sets used in the various modeling frameworks are assumed to be representable as countable unions of singleton elements, unless otherwise stated, and to be equipped with an appropriate discrete topology. For any set X over which a probability measure is required, an adequate probability space ⟨X, Σ, Pr⟩ is assumed to exist, in which X is the default space of possible outcomes, Σ is the Borel algebra generated by the elements of X, and Pr(A) is the probability measure of event A ∈ Σ. Under this terminology, Ξ(X) refers to the family of all possible probability distributions over X. The power set of X will be represented as P(X), and its cardinality as |X|. The Cartesian product of sets X and Y will be written as X × Y, and as ∏_{i=1}^{n} X_i in the n-ary case.

2.1 Markov Decision Processes

The MDP framework is particularly suited to modeling problems where the outcome of the actions of an agent is probabilistic, but the state of the process can be known, without uncertainty, at any given time. MDPs were introduced in the context of Operations Research (Bellman, 1957a), and have since found their way into diverse practical single-agent applications (further discussed in Section 3.1). The framework forms a basis for many reinforcement learning techniques (Kaelbling et al., 1996; Sutton and Barto, 1998). Formally, an MDP is defined as follows:

Figure 2.2: A 2-TBN representing an MDP.

Definition 2.1.1. A Markov Decision Process (MDP) is a tuple ⟨S, A, T, R, s₀, h⟩, where:

S is the state space, a discrete set of possibilities for the state s of the process;

A is the action space, the set of actions a that the agent can perform;

T : S × S × A → [0, 1] is the transition function, such that T(s', s, a) = Pr(s' | s, a). It models the stochasticity in the actions of the agent;

R : S × A → ℝ is the reward function. The value R(s, a) represents the "reward" that an agent receives for performing action a in state s, representing either a utility or a cost;

s₀ ∈ S is the initial state of the process;

h is the planning (or decision) horizon. It defines the number of future steps in the system that the agent must account for when planning.

The reward function is sometimes represented as R(s', s, a), making it depend also on the resulting state s', but an equivalent formulation R(s, a) is always possible (Cassandra, 1998a; Oliehoek, 2010).

Given this definition, and in looser terms, a Markov Decision Process represents a system in which, at steps n = 0, . . . , h, an action a is taken by the agent, causing the system's state to change stochastically from s to s' according to T(s', s, a). This behavior can be graphically represented as a two-slice Temporal Bayesian Network (2-TBN), which captures the probabilistic dependencies between variables at different steps in the process, as shown in Figure 2.2. For an introduction to Dynamic Bayesian Networks, the reader is referred to Murphy (2002).

In such a system, the Markov property holds, i.e., the distribution over the future state of the system depends only on the present state and action at any given decision step. Equivalently:

Pr(s_{n+1} | s_n, a_n, s_{n−1}, a_{n−1}, . . . , s_0, a_0) = Pr(s_{n+1} | s_n, a_n)    (2.1)

Although less common, there are also variations of this type of model which consider continuous state and/or action spaces (Antos et al., 2007; Hauskrecht and Kveton, 2003).

2.1.1 Finding Agent Policies in MDPs

The objective of a planning task in any MDP-based model is to generate a sequence of actions {a₁, a₂, . . . , a_h} which attempts to maximize the accumulated reward over the horizon of the problem. In the case of a standard MDP, this typically amounts to the problem of obtaining a set of decision rules δ : S → A, or in other words, a map of system states to reward-maximizing actions. The set π(s) = {δ₁(s), δ₂(s), . . . , δ_h(s)} of decision rules for every time step constitutes an h-horizon policy for the agent. If h is infinite, then δ₁(s) = δ₂(s) = . . . = δ_h(s) = π(s); such a policy is said to be stationary, or time-invariant. Policies can also be stochastic, if decision rules instead map to probabilities over actions, i.e. δ : S → Ξ(A). Most methods presented in this chapter assume policies to be deterministic, since this is the prevalent case in existing MDP theory. There is a large body of well-established literature on the problem of obtaining optimal, as well as approximate, policies for MDPs (Feinberg and Shwartz, 2002; Puterman, 1994). The most well-known methods are outlined in this section, providing insight into the necessary differences, and eventual similarities, between solution methods for MDPs and those of the associated extensions which will be introduced throughout this chapter.

Since the future states of an MDP cannot be predicted with full certainty, a common measure of the quality of a solution, which most solvers try to optimize, is the expected discounted reward over future states:

E^π { ∑_{n=0}^{h} γ^n R(s_n, δ_n(s_n)) },    (2.2)

where E^π{·} is the expectation operator, considering that the agent is following policy π. The discount factor γ ∈ [0, 1] allows the series to converge in the case of an infinite-horizon problem, and is usually 1 in the finite-horizon case.

The above quantity is referred to as the value of a given state according to π, V^π(s). The complete mapping V^π : S → ℝ constitutes a value function for the problem.

One of the most ubiquitous solution methods for discrete MDPs is the value iteration algorithm (Bellman, 1957b). It recursively updates the value function through the Bellman backup equation, also known as the Dynamic Programming (DP) approach:

V^π_n(s) = R(s, δ_n(s)) + γ ∑_{s'∈S} T(s', s, δ_n(s)) V^π_{n+1}(s')    (2.3)

For infinite-horizon problems, this can be slightly modified to provide the optimal value function V*(s):

V*(s) = max_{a∈A} { R(s, a) + γ ∑_{s'∈S} T(s', s, a) V*(s') }    (2.4)

Equation (2.4) is guaranteed to converge to a single invariant point, which corresponds to the optimal value function for the problem (Puterman, 1994). For finite-horizon problems, only h backups are needed to obtain the optimal value function, but each intermediate result must be stored. If h is infinite, the value function is stationary, so only the final result is necessary. Optimality in this context means that, starting from state s, there exists no sequence of actions of length h with expected reward higher than V*_h(s). The optimal policy π*(s) can easily be extracted from the problem's optimal value function by simply choosing, in each state, the action associated with the highest expected value:

π*(s) = argmax_{a∈A} { R(s, a) + γ ∑_{s'∈S} T(s', s, a) V*(s') }    (2.5)
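To make the dynamic programming recursion concrete, the following is a minimal Python sketch of value iteration for a small discrete MDP. The array layout (T[a, s, s'] = Pr(s' | s, a), R[s, a]) and the toy two-state problem are assumptions of this example, not constructs from the thesis.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-6):
    """Infinite-horizon value iteration, following Eqs. (2.4)-(2.5).

    Assumed layout: T[a, s, s2] = Pr(s2 | s, a), R[s, a] = reward for a in s.
    """
    V = np.zeros(T.shape[1])
    while True:
        # Bellman backup: Q(s, a) = R(s, a) + gamma * sum_s' T(s'|s, a) V(s')
        Q = R + gamma * (T @ V).T
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            # Greedy policy extraction, as in Eq. (2.5).
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state, two-action MDP.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.0, 1.0]]])  # action 1
R = np.array([[0.0, 1.0],
              [0.0, 2.0]])
V_star, pi_star = value_iteration(T, R)
```

For a finite horizon, the same backup would simply be applied h times, storing each intermediate value function.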

A different approach to this problem, known as policy iteration, instead operates in the space of possible policies. It alternately calculates the expected value associated with a given (possibly non-optimal) policy, and improves that policy by changing its first action, when possible (Sutton and Barto, 1998). This method is also guaranteed to converge towards the optimal policy, but, in the worst case (which seldom occurs for non-degenerate scenarios), it must iterate through an exponential number of policies.
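A minimal sketch of the policy iteration scheme just described, under the same assumed array layout as the previous example; here exact policy evaluation is done by solving the linear system (I − γ T_π) V = R_π.

```python
import numpy as np

def policy_iteration(T, R, gamma=0.95):
    """Policy iteration for a small discrete MDP (illustrative sketch only)."""
    n_actions, n_states, _ = T.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: V_pi = R_pi + gamma * T_pi V_pi, solved exactly.
        T_pi = T[policy, np.arange(n_states), :]
        R_pi = R[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * T_pi, R_pi)
        # Policy improvement: greedy one-step lookahead on the evaluated V.
        new_policy = (R + gamma * (T @ V).T).argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```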

Even though the state of the system is available to the agent at run-time, if the system dynamics are known (i.e. the matrix T), then the reward R(s, a) received for a given state-action pair does not need to be observed. Planning is then said to be done "offline", meaning that policy execution is not required in order to solve the decision-making problem. When the stochastic models of the environment are not known a priori, solving an MDP becomes a reinforcement learning problem. In this type of problem, the reward collected by the agent as it interacts with the environment is also observed at run-time, or "online", either immediately associated with a given state and action, or after the episodic execution of a given policy. A common way of addressing this problem is through the use of temporal difference learning (TD-learning) methods and their closely related adaptations, which estimate the value function, and improve the associated policy accordingly, using the immediately collected reward as an input. The most basic form of temporal difference learning is implemented through the so-called TD(0) equation:

V(s) = (1 − α) V(s) + α ( R(s, a) + γ V(s') ),    (2.6)

where α ∈ [0, 1] is the learning rate or step size, which in practice acts as an exponential smoothing over estimated values. These methods rely on the property that R(s, a) + γ V(s') approaches V(s) as the value function converges towards its invariant point. They are in fact instantiations of a more general interpretation of the policy iteration algorithm. However, for these methods to converge to the global optimum, sufficient exploration of state-action pairs is assumed (i.e. each pair must be visited a number of times). To this end, reinforcement learning methods enforce some level of stochasticity in the resulting agent policy, which leads the agent to select random actions if the state space must be further explored.

TD-learning methods can be classified as on-policy, if they use the current estimate of the agent's policy to predict upcoming actions, or as off-policy methods, which typically use a greedy approach to action prediction. A very common off-policy reinforcement learning method based on temporal differences is Q-learning (Watkins, 1989), which updates the optimal value of state-action pairs (s, a) as follows:

Q(s, a) = (1 − α) Q(s, a) + α ( R(s, a) + γ max_{a'∈A} Q(s', a') )    (2.7)

Common on-policy variations include TD(λ), SARSA, and actor-critic methods (Kaelbling et al., 1996; Sutton, 1984; Sutton and Barto, 1998).
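As an illustration of the off-policy update (2.7), the sketch below runs one episode of tabular Q-learning with ε-greedy exploration. The env_step(s, a) → (reward, next_state, done) interface is a hypothetical placeholder assumed only for this example.

```python
import numpy as np

def q_learning_episode(env_step, Q, alpha=0.1, gamma=0.95, eps=0.1,
                       s0=0, max_steps=1000, seed=None):
    """One episode of tabular Q-learning (Eq. 2.7) with epsilon-greedy exploration."""
    rng = np.random.default_rng(seed)
    s = s0
    for _ in range(max_steps):
        # Epsilon-greedy action selection keeps visiting state-action pairs.
        a = rng.integers(Q.shape[1]) if rng.random() < eps else int(Q[s].argmax())
        r, s_next, done = env_step(s, a)
        # Off-policy target: greedy value of the successor state.
        target = r + gamma * Q[s_next].max()
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
        if done:
            break
        s = s_next
    return Q
```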


2.2 Extensions for Continuous-Time Problems

The concept of time in an MDP is an abstract one, from the perspective of the planning problem. In an MDP, a decision is taken whenever an action is considered to have finished, or equivalently, when a state transition is known to have occurred. The "time" index in much of the associated theory more accurately represents the number of sequential decision "epochs", or episodes, which have elapsed, or the remaining number of such decisions left to take. It should then be made clear that decisions are not actually bound to any specific time instant, or interval, associated with the execution of the underlying system, but rather to state transitions which are triggered by actions.

A typical approach to the control of continuous-time systems (such as robotics applications) through discrete decision-theoretic models is to assume that either the elapsed time or the travelled distance since the last decision is fixed (Little, 1955; Matarić, 1997) (the former is also a popular approach to maintaining synchronous decision making in multiagent scenarios). This type of termination condition will be referred to, in this work, as static termination, as a reference to the fact that these conditions are predetermined for each starting state and action, and are independent of other effects occurring in the environment. This notion is generalized by dynamic termination conditions, in which an action is carried out until a logical condition is met, such as the occurrence of an observable event.

In many real-world scenarios, the physical time which elapses in the system is relevant to the decision-making process. This is the case, for example, of queueing and maintenance systems (Puterman, 1994), control of epidemic processes and population modeling (Guo and Hernández-Lerma, 2009), dynamic power management systems (Rong and Pedram, 2003), and also of robotics problems (Mahadevan and Khaleeli, 1999), which are of special interest to this work. In Section 5.1, it will be proposed that continuous-time modeling can be leveraged in order to simplify planning in common robotics tasks.

2.2.1 Semi-Markov Decision Processes

A Semi-Markov Decision Process (SMDP) (de Cani, 1964; Howard, 1963; Jewell, 1963) extends the basic MDP framework by allowing the underlying system dynamics to also depend on the time since the last decision was carried out. Since the model dynamics will then require memory of the instant in which a particular state was entered, the system does not strictly maintain the Markov property within each state. However, after a state transition takes place, all memory is discarded, and the future state of the system can still be predicted based only on present information. This is why such a system is said to be Semi-Markov.

Definition 2.2.1. A Semi-Markov Decision Process (SMDP) is a tuple ⟨S, A, T, F, R, C, s₀, h⟩, where:

S, A, T, R, s₀, h have the same definition as in a Markov Decision Process (see Definition 2.1.1);

F = { f^a_{s,s'} : ℝ⁺_0 → ℝ | ∫_{τ₁}^{τ₂} f^a_{s,s'}(t) dt = Pr(τ₁ ≤ t ≤ τ₂ | s', s, a), ∀s, s' ∈ S, a ∈ A } is the time model, a set of probability density functions over the instant of the next decision, indexed by state-transition and action tuples ⟨s', s, a⟩. F^a_{s,s'}(T) = Pr(t ≤ T | s', s, a) denotes the respective cumulative distribution function for each of these tuples;

C : S × A → ℝ is the cumulative reward rate which, in addition to the instantaneous reward model R, allows the modeling of a utility or cost associated with the sojourn time at s while executing a.

Equipped with a time model, an SMDP can describe, for every state and action, and possibly depending on the resulting future state, a probability density function over the specific time instant of the next decision. This usually corresponds to the occurrence of suitably defined events which are known to correspond to changes in the state of the system. This establishes a close parallelism between SMDPs and those areas of the theory of Discrete-Event Systems (Cassandras and Lafortune, 1999) which explicitly model time, such as Stochastic Timed Automata (STA) (Neto, 2010) and Generalized Stochastic Petri Nets (GSPNs) (Murata, 1989). However, in contrast to these frameworks, a general SMDP allows the state of the system to change asynchronously with the decisions taken by the agent; the system state can in fact change multiple times between decisions (Mahadevan and Khaleeli, 1999; Puterman, 1994). This work will restrict its focus to SMDPs in which decisions are taken upon the occurrence of any state transition, which will allow a later correspondence between decisions and events.

In this case, and with the definition of the cumulative reward model C, the total reward for a state-action pair in an SMDP is given by:

U(s, a) = R(s, a) + E_τ { ∫_0^τ e^{−λt} C(s, a) dt }    (2.8)
        = R(s, a) + ∑_{s'∈S} ∫_0^∞ p(τ, s' | s, a) ( ∫_0^τ e^{−λt} C(s, a) dt ) dτ,    (2.9)

where p(τ, s' | s, a) represents the mixed continuous-discrete joint distribution over resulting states and decision instants. The last term in (2.8) corresponds to the expectation of the reward accumulated through C over the time τ between decision instants, discounted over time by an exponential parameter. Note that the discount factor λ, limited to the interval [0, ∞), serves the same purpose as γ in a discounted MDP. Through a simple application of Bayes' Theorem:

p(τ, s' | s, a) = p(τ | s', s, a) Pr(s' | s, a) = f^a_{s,s'}(τ) T(s', s, a)    (2.10)

This implies that, in an SMDP, T(s', s, a) represents the state transition probabilities only at decision instants, and is not influenced by time. It can be thought of as describing an "embedded" discrete-time MDP at these instants (Puterman, 1994). The total reward can then be simplified as:

U(s, a) = R(s, a) + ∑_{s'∈S} T(s', s, a) ∫_0^∞ ( f^a_{s,s'}(τ) ∫_0^τ e^{−λt} C(s, a) dt ) dτ    (2.11)
        = R(s, a) + (1/λ) C(s, a) ∑_{s'∈S} T(s', s, a) ( 1 − ∫_0^∞ f^a_{s,s'}(τ) e^{−λτ} dτ )    (2.12)
        = R(s, a) + (1/λ) C(s, a) ∑_{s'∈S} T(s', s, a) ( 1 − L{f^a_{s,s'}(τ)} ),    (2.13)

where L{f(t)} denotes the Laplace transform of f(t), as used in de Cani (1964) and Jewell (1963).

The value of executing a stationary policy π in an SMDP is then, for each state:

V^π(s) = U(s, π(s)) + ∑_{s'∈S} V^π(s') ∫_0^∞ p(t, s' | s, π(s)) e^{−λt} dt    (2.14)
       = U(s, π(s)) + ∑_{s'∈S} T(s', s, π(s)) V^π(s') ∫_0^∞ f^{π(s)}_{s,s'}(t) e^{−λt} dt    (2.15)
       = U(s, π(s)) + ∑_{s'∈S} L{f^{π(s)}_{s,s'}(t)} T(s', s, π(s)) V^π(s')    (2.16)

Equation (2.16) allows an SMDP to be viewed as an equivalent discrete-time MDP with state-action dependent discount rates γ(s, a) = L{f^a_{s,s'}(t)} (Bradtke and Duff, 1995; Puterman, 1994). This is a very useful practical result for SMDPs, since virtually all solution algorithms for discrete MDPs can also be applied to this class of models. Well-established reinforcement learning algorithms, such as TD(λ) and Q-learning, have been shown to work with SMDPs in Bradtke and Duff (1995).
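The following is a sketch of how Eq. (2.16) reduces an SMDP to a discrete-time MDP with transition-dependent discount factors. Purely for illustration, it assumes exponential sojourn-time densities f^a_{s,s'}, whose Laplace transform at λ is rate/(rate + λ); the array names and layout are assumptions of this example.

```python
import numpy as np

def smdp_value_iteration(T, U, rate, lam=0.1, tol=1e-6):
    """Value iteration on the embedded MDP of an SMDP (Eq. 2.16).

    T[a, s, s2] = Pr(s2 | s, a), U[s, a] = total reward of Eq. (2.13),
    rate[a, s, s2] = parameter of an assumed exponential sojourn-time density.
    """
    gamma = rate / (rate + lam)          # L{f^a_{s,s'}}(lam), elementwise
    V = np.zeros(T.shape[1])
    while True:
        # Q(s, a) = U(s, a) + sum_s' gamma(s, a, s') T(s'|s, a) V(s')
        Q = U + np.einsum("asn,asn,n->sa", gamma, T, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```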

One limitation of SMDPs is that, since the system loses all information regarding past states and actions upon reaching a new state, the time distribution associated with each transition must lose all memory accordingly. The following example illustrates this problem:

Example 2.2.1. Suppose that a robot operates in an environment with two rooms, represented by states L and R, and is able to move from one room to the other with fixed probability, according to a single action a. The robot can run out of battery, at which point it will enter the absorbing state N. The problem can be modeled as the discrete MDP represented in Figure 2.3.

Figure 2.3: A simple scenario in which multiple events can trigger in the same state. Events, here corresponding to transitions between states, are represented by arrows and labeled e_{s,s'}.

The flaw of this model is that events are assumed to have a static probability of occurrence. That is to say, the probability of running out of battery in states L and R is the same regardless of how many decision steps have elapsed. In a Semi-Markov model of this problem, even though arbitrary temporal distributions could be assigned to each event, the same memoryless property would have to be enforced at each decision step. This would erroneously represent that the expected lifetime of the robot's battery remains constant throughout execution. The event of running out of battery is said to be concurrent with all others, in that it remains enabled despite the triggering of other events. The only way in which this problem could be modeled as a (S)MDP would be to discretize the battery levels of the robot, and extend the state with that information.


2.2.2 Continuous-Time Markov Decision Processes

Another class of models closely related to SMDPs is that of Continuous-Time Markov Decision Processes (CTMDPs). CTMDPs are defined by the same constructs as an SMDP, and can in fact be viewed as special cases of the latter class, in that they assume that all distributions described in the time model f^a_{s,s'} (refer to Definition 2.2.1) are exponential in nature. Due to the memoryless property of these distributions, in this case, the system remains fully Markovian. This simplification allows CTMDPs to handle event concurrency with greater efficiency, which is particularly useful in modeling factored (see Section 2.4.3) or multiagent scenarios (Younes and Simmons, 2004). However, the assumption of exponential time models for every transition can be of limited validity in practice. CTMDPs are suitable, for example, for arrival-rate problems in queueing systems, and for population modeling with limited resources (Guo and Hernández-Lerma, 2009), in which all events are known to be generated by underlying Poisson processes. Any CTMDP can be transformed into an equivalent discrete-time MDP through a process of uniformization (Neto, 2010; Puterman, 1994), which first provides an equivalent, oversampled continuous-time model with equal decay rates for every transition, and then calculates an appropriate discount factor that accounts for the time in each transition. As in the case of SMDPs, this allows CTMDPs to be solved through virtually any MDP solution algorithm.
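The sketch below illustrates the uniformization step for a CTMDP with exponential transition rates: all state-action pairs are resampled at a common rate c, self-loops absorb the difference, and the continuous-time discount rate λ becomes a per-step discount factor c/(c + λ). The naming and array layout are assumptions of this example, and the reward model would have to be rescaled accordingly.

```python
import numpy as np

def uniformize(rates, lam=0.1):
    """Turn CTMDP transition rates into a discrete-time MDP (transitions only).

    rates[a, s, s2] (s2 != s) is the exponential rate of jumping from s to s2
    under action a; diagonal entries are assumed to be zero.
    """
    n_actions, n_states, _ = rates.shape
    exit_rate = rates.sum(axis=2)            # total rate out of each (s, a)
    c = exit_rate.max()                      # uniform sampling rate, c >= all exit rates
    P = rates / c                            # jump probabilities at the sampled instants
    idx = np.arange(n_states)
    P[:, idx, idx] += 1.0 - exit_rate / c    # self-loops make each row sum to one
    gamma = c / (c + lam)                    # expected discount per uniformized step
    return P, gamma
```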

2.2.3 Generalized Semi-Markov Decision Processes

In an effort to generalize the asynchronous property of CTMDPs to systems with arbitrary time distributions, the Generalized Semi-Markov Decision Process (GSMDP) model was introduced in Younes and Simmons (2004). This constitutes an extension of the earlier Generalized Semi-Markov Process (GSMP) formalism, introduced in Glynn (1989) to address the same problem in uncontrolled systems. In GSMDPs (and GSMPs), the occurrence of an event in the system does not necessarily erase all information of past events, which might have influenced the time models.

Definition 2.2.2. A Generalized Semi-Markov Decision Process (GSMDP) is a tuple ⟨S, A, T, F, R, C, E, s₀, h⟩, where:

S, A, R, C, s₀, h have the same definition as in a Semi-Markov Decision Process (see Definition 2.2.1);

E is a set of events e which may occur in the system at decision instants. For each state-action tuple ⟨s, a⟩, there exists a set of enabled events E_{s,a} = {e ∈ E | Pr(e | s, a) > 0};

T : S × S × E → [0, 1] represents state transition probabilities given events, T(s', s, e) = Pr(s' | s, e), ∀s, s' ∈ S, e ∈ E;

F holds the same meaning as in an SMDP, but it is instead indexed by events, i.e. F = { f^e_{s'} : ℝ⁺_0 → ℝ | ∫_{τ₁}^{τ₂} f^e_{s'}(t) dt = Pr(τ₁ ≤ t ≤ τ₂ | s', e), ∀s' ∈ S, e ∈ E }.

Figure 2.4: A DBN representing a GSMDP. Since the system is non-Markovian, it cannot be described through a 2-TBN.

Since the main purpose of GSMDPs is to resolve event concurrency, a formal definition of event becomes necessary, and is here introduced into the model. It should be noted that the definition used here is different from that of Younes and Simmons (2004), in which events are considered as (possibly uncontrollable) abstractions of actions, enabled according to the system state. In this work, events are regarded as abstractions of state transitions, which in turn may be enabled by certain state-action pairs (see Example 2.2.2). Equivalent models can be derived under either definition, so this does not result in any loss of generality. This will allow for an easier extension of these models to more realistic scenarios in which events must be observed with an associated uncertainty.

Example 2.2.2. Consider an environment with states S = {I, G, B}, shown in Figure 2.5. A robot is located in initial state I, tasked with the goal of reaching G. The actions available to the robot are to either Move (a₁) or Idle (a₂). By performing a₁, there is a non-zero probability of reaching G, and also of becoming blocked by an obstacle, consequently entering B. Performing a₂ always leads back to I with certainty. Through the original GSMDP definition (Younes and Simmons, 2004), this system can be modelled using two controllable events, which in this case correspond to the available actions, so that E = {e₁, e₂} ≡ A. Both events are enabled at I. The model T({G, B}, I, e₁) will then describe the probability of either reaching G or B, conditioned on the occurrence of e₁. Each of these transitions can have its own time model f^{e₁}_{{G,B}}(t).

Figure 2.5: An example of the practical difference in the definition of events between (a) the original GSMDP formulation of Younes and Simmons (2004) and (b) Definition 2.2.2, as used in this work.

However, from the perspective of the robot, and considering the use of its sensing capabilities, becoming blocked by an obstacle is intuitively regarded as a different sensorial event than reaching the goal state. This motivates viewing all events as possible consequences of actions, rather than as actions themselves. The definition introduced here (Definition 2.2.2) regards all events as exogenous when conditioned on the execution of an action. This leads to an alternative model (Figure 2.5b), with E = {e₁, e₂, e₃}, in which E_{I,a₁} = {e₁, e₂} and E_{I,a₂} = {e₃}. This allows for a closer relationship between event detection, which plays an important role in many robotics applications (particularly those with incomplete or noisy information), and policy execution. Furthermore, a theoretical equivalence can be established between models using either definition, although that is not shown in the current state of this work.

Let ε_n = ⟨s_n, e_n, t_n⟩ represent a "snapshot" of the execution of the system after n events, containing the current state, the last triggered event, and the time since the last decision was made. An execution history ~ε = {ε_0, ε_1, . . . , ε_n} makes it possible to infer the time which has elapsed since each event became enabled, even if different events have meanwhile been triggered. This information is explicitly necessary in order to accommodate any general non-Markovian distribution f^e_{s'} across multiple decision instants. As such, a GSMDP allows the actions of the agent to depend on execution histories, instead of simply on states. Let T_i = ∑_{j=0}^{i} t_j. Ultimately, the goal of the planning problem is to optimize the value function over the complete execution history:

V^π(~ε) = ∑_{i=0}^{n} e^{−λT_i} ( R(s_i, π(~ε_{0:i})) + ∫_0^{t_i} e^{−λt} C(s_i, π(~ε_{0:i})) dt )    (2.17)
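As a small worked illustration of Eq. (2.17), the sketch below evaluates one execution history, using the closed form ∫_0^t e^{−λu} du = (1 − e^{−λt})/λ for the accumulated reward term. The callables policy, R and C are placeholders assumed only for this example.

```python
import math

def history_value(history, policy, R, C, lam=0.1):
    """Discounted value of an execution history [(s_i, e_i, t_i), ...] per Eq. (2.17)."""
    value, elapsed = 0.0, 0.0
    for i, (s_i, e_i, t_i) in enumerate(history):
        elapsed += t_i                        # T_i = sum_{j<=i} t_j
        a_i = policy(history[: i + 1])        # the action may depend on the whole prefix
        lump = R(s_i, a_i)                    # instantaneous reward at the decision instant
        accrued = C(s_i, a_i) * (1.0 - math.exp(-lam * t_i)) / lam
        value += math.exp(-lam * elapsed) * (lump + accrued)
    return value
```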

The maximization of (2.17) is a difficult problem, since the underlying dynamics are non-Markovian, and typical DP approaches cannot be directly applied in such a case. Younes and Simmons (2004) proposed a method to solve a GSMDP by first approximating each non-exponential component of F with a phase-type distribution. This implies an expansion of the state space of the problem, but allows the resulting approximate model to be represented by a CTMDP. Consequently, and through uniformization, typical discrete-time MDP methods can be applied. Alternatively, Rachelson et al. (2008) have recently proposed an approximate reinforcement learning algorithm for GSMDPs based on policy iteration.

2.3 Extensions for Partially Observable Domains

In many practical scenarios to which MDPs can be applied, it is not unrealistic to assume that the state of the environment can indeed be known by the agent. Such is the case in many Operations Research scenarios, where the state can comprise, for example, the number of people in a queue or the phase of a manufacturing process; and also in virtual scenarios (Little, 1955; Stidham and Weber, 1993; Tesauro, 1992). However, whenever a physical agent is required to actively sense its environment, through whichever means are available to it, this assumption may quickly be violated, since typically only an estimate of the actual state of the system can be obtained. In other scenarios, even if the agent can sense its surroundings with great accuracy, it may still only be observing part of the complete state of the system. The term partial observability refers to both of these situations, and establishes a contrast with the fully observable scenarios described so far, in which the state is directly accessible at any time. The field of Robotics provides evident examples of scenarios with partially observable environments (Mahadevan and Khaleeli, 1999; Miller et al., 2009; Nourbakhsh et al., 1995; Spaan and Vlassis, 2004; Sridharan et al., 2010). POMDPs can also be viewed as a controllable form of Hidden Markov Models (HMMs), which are widely used in the field of Bayesian inference (Murphy, 2002).


2.3.1 Partially Observable Markov Decision Processes

The framework of Partially Observable Markov Decision Processes (POMDPs) constitutes a conceptually straightforward extension of MDPs to situations in which the agent is not only allowed to act upon the environment, but is also able to observe it at each step of the process. These observations are generated by the environment itself, through an associated stochastic model, as a response to a given state-action pair.

Figure 2.6: A 2-TBN representing a POMDP.

Definition 2.3.1. A Partially Observable Markov Decision Process (POMDP) is a tuple ⟨S, A, T, O, O, R, b₀, h⟩, where:

S, A, T, R, h have the same definition as in a Markov Decision Process (see Definition 2.1.1);

O is a set of observations o which can be generated by the environment;

O : O × S × A → [0, 1] is the observation function, such that O(o, s', a) = Pr(o | s', a) for o ∈ O, s' ∈ S, a ∈ A;

b₀ ∈ Ξ(S) is an initial probability distribution over states.

The central difference in the theoretical treatment of POMDPs, with respect to

MDPs, is that, since the state of the system is no longer directly acessible, POMDPs

must instead operate over elements of Ξ(S). These elements are often referred to as

belief states, and represented as b ∈ B, where B ≡ Ξ(S) and is likewise termed as belief

space. If the true state of the system is x, then b(s) = Pr(x = s) at any given step.


2.3.1.1 Finding agent policies in POMDPs

As described through (2.3), a value function of a MDP can be compactly represented as

a finite-dimensional quantity (specifically, a vector in R|S|). In the POMDP case, since

there are infinitely many elements in B, this is no longer possible, and it is therefore

necessary to obtain an alternative representation for the expected reward associated

with any policy π : B → A. This expectation, taken over all steps in the decision

process, takes the form:

\[
E\left\{\sum_{n=0}^{h} \gamma^{n} \sum_{s\in S} b_n(s)\, R\big(s, \delta_n(b_n)\big)\right\}. \tag{2.18}
\]

In turn, the Bellman equation (2.3) applied to POMDPs becomes:

\[
V^{*}_{n}(b) = \max_{a\in A}\left\{\sum_{s\in S} b(s)\left(R(s,a) + \gamma \sum_{s'\in S}\sum_{o\in O} T(s',s,a)\, O(o,s',a)\, V^{*}_{n+1}\big(\Phi(b,a,o)\big)\right)\right\}, \tag{2.19}
\]

where Φ : B × A × O → B is a belief update function which, given an action and a

subsequent observation, univocally maps a belief state at step n to another at step

n+ 1. Specifically, ∀s ∈ S:

\[
b'(s) = \Phi(b,a,o)\big|_{s} = \frac{O(o,s,a) \sum_{s'\in S} T(s,s',a)\, b(s')}{\sum_{s''\in S} O(o,s'',a) \sum_{s'\in S} T(s'',s',a)\, b(s')}. \tag{2.20}
\]

This function intuitively describes the process of incorporating information from the

actions and observations of an agent into its state estimate.
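As an illustration, (2.20) can be transcribed directly into a few lines of Python; the array layout (T[a, s, s'] = Pr(s' | s, a) and O[a, s', o] = Pr(o | s', a)) is an assumed convention for this sketch only:

    import numpy as np

    def belief_update(b, a, o, T, O):
        """Phi(b, a, o) of Eq. (2.20): Bayesian filtering of the belief state."""
        predicted = b @ T[a]                # Pr(s' | b, a), shape (|S|,)
        unnormalized = O[a, :, o] * predicted
        return unnormalized / unnormalized.sum()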

Equation (2.19) enables the application of the Value Iteration algorithm to POMDP

settings. It can provide an optimal (or in the infinite-horizon case, an approximately

optimal) policy for the agent for any given belief state, but it does not address the

problem of how to actually represent the respective solution as a finite-sized structure,

so that it can be accessed at run-time. It is well known, however, that (2.19) is a

piecewise-linear convex function (Cheng, 1988; Sondik, 1971), and, as such, admits a

compact representation as:

\[
V^{*}_{n}(b) = \max_{\alpha\in\Gamma_n}\left\{\sum_{s\in S} b(s)\,\alpha(s)\right\}, \tag{2.21}
\]

where each α is a |S|-dimensional vector, often simply referred to as an α-vector, and

Γn is the set of all such vectors at step n. Furthermore, a mapping ν : Γn → A exists


which associates each α-vector with an action at that step. Intuitively, an α-vector

encodes, as a linear function of the belief state of the system, the expected reward of

taking action ν(α) and then following the optimal policy for the remaining steps. Using

representation (2.21) allows a value function to be represented at each step by simply

storing the sets Γn. The respective policy can easily be extracted:

\[
\delta_n(b) = \nu\left(\arg\max_{\alpha\in\Gamma_n}\left\{\sum_{s\in S} b(s)\,\alpha(s)\right\}\right) \tag{2.22}
\]
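Evaluating (2.21) and extracting the policy of (2.22) thus amounts to a single inner-product maximization. A minimal Python sketch, in which Gamma stacks the α-vectors row-wise and nu lists the action associated with each row (both names are illustrative):

    import numpy as np

    def value_and_action(b, Gamma, nu):
        """V_n(b) per Eq. (2.21) and the greedy action per Eq. (2.22)."""
        products = Gamma @ b          # one inner product b . alpha per alpha-vector
        k = int(np.argmax(products))
        return products[k], nu[k]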

An important property of the value function when represented in this way, which will

later be used in this work (see Chapter 4), is that, due to its convexity, each α-vector

defines a convex region of B over which it is maximal. For a finite number of alpha

vectors (this number is bounded above by $|A|^{\frac{|O|^{h+1}-1}{|O|-1}}$ (Cassandra, 1998a)), all regions

generated by Γn are convex polytopes, defined by the following constraints:

\[
\sum_{s\in S} b(s)\big(\alpha'(s) - \alpha(s)\big) \le 0, \quad \forall \alpha'\in\Gamma_n,\ \alpha'\neq\alpha \tag{2.23}
\]
\[
\sum_{s\in S} b(s) = 1 \tag{2.24}
\]
\[
b(s) \ge 0, \quad \forall s\in S \tag{2.25}
\]

These regions are referred to as the linear support of each α ∈ Γn, or L(α) (Cheng,

1988). This concept is exemplified in Figure 2.7. The linear support of a given α is

non-empty if and only if there is some point b ∈ B for which δn(b) ≡ ν(α), according to

(2.22). All vectors which do not satisfy this condition may be discarded, since they do

not contribute any information towards the policy of the agent (Cassandra, 1998a).
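The constraints (2.23)-(2.25) translate directly into a linear program. The following sketch checks whether L(α) is non-empty by searching for a witness belief with positive slack, using SciPy's general-purpose linear programming routine; it is a standard dominance test written for illustration, not the pruning routine of the cited algorithms:

    import numpy as np
    from scipy.optimize import linprog

    def has_nonempty_linear_support(alpha, others, eps=1e-9):
        """True if some belief b satisfies (2.23)-(2.25) with margin > eps."""
        others = np.atleast_2d(others)
        if others.shape[0] == 0:
            return True
        n = alpha.size
        # Variables [b_1, ..., b_n, d]; maximize d (linprog minimizes, so use -d).
        c = np.zeros(n + 1); c[-1] = -1.0
        A_ub = np.hstack([others - alpha, np.ones((others.shape[0], 1))])  # b.(a'-a)+d <= 0
        b_ub = np.zeros(others.shape[0])
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])              # sum(b) = 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return bool(res.success and -res.fun > eps)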

Figure 2.7: The linear supports of a value function as a convex partition of B.

Attempting to optimally solve a POMDP is a PSPACE-hard problem (Papadim-

itriou and Tsitsiklis, 1987). Classical POMDP algorithms attempt to determine the


minimal set of α-vectors which make up the optimal value function, either by enu-

merating every such possible vector and pruning the result (Monahan, 1982), or by

exploring the structure of the induced linear supports over the belief space (Cassan-

dra, 1998a; Cheng, 1988). Many state-of-the-art POMDP solution algorithms forgo

strict optimality in favor of scalability. A particularly successful class of algorithms is

that of point-based solvers. These solvers optimize expected reward over a finite set of

samples taken from the belief space. This approach takes advantage of the fact that,

from a given initial belief, there is only a finite set of possible (or “reachable”) belief

states, given all possible observations, and the actions selected by the agent at those

points. Therefore, calculating the optimal value function over the complete belief space

often becomes unnecessary. The set of maximal α-vectors, over all belief state samples,

constitutes an approximation to the optimal value function. Examples of this type of

algorithm include PBVI (Pineau et al., 2003), SARSOP (Kurniawati et al., 2008) and

Perseus (Spaan and Vlassis, 2005).
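To make the point-based idea concrete, the backup performed at a single sampled belief b can be sketched as follows, under the same assumed array layout as before and with Gamma holding the α-vectors of step n + 1. This is only an illustrative reconstruction of the backup operation shared by PBVI/Perseus-style solvers, not the implementation of any particular one:

    import numpy as np

    def point_based_backup(b, Gamma, T, O, R, gamma):
        """Return the alpha-vector (and its action) that is maximal at belief b."""
        nA, nS, _ = T.shape
        nO = O.shape[2]
        best = (None, None, -np.inf)
        for a in range(nA):
            alpha_a = R[a].astype(float).copy()
            for o in range(nO):
                # g[k, s] = sum_{s'} T[a, s, s'] O[a, s', o] Gamma[k, s']
                g = Gamma @ (T[a] * O[a, :, o][None, :]).T
                alpha_a += gamma * g[int(np.argmax(g @ b))]
            value = float(alpha_a @ b)
            if value > best[2]:
                best = (alpha_a, a, value)
        return best[0], best[1]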

2.3.2 Partially Observable Semi-Markov Decision Processes

The considerations which have been previously made regarding the limitations of the

MDP framework, when dealing with time-dependent problems (Section 2.2), are also

applicable to the case of POMDPs. Likewise, the extensions which have been formu-

lated in order to overcome these shortcomings, for the fully observable case (CTMDPs,

SMDPs and GSMDPs), provide a basis for the definition of analogous counterparts

for partially observable domains. One such readily defined extension, introduced in

(White, 1976), and used in the context of mobile robot navigation (Mahadevan and

Khaleeli, 1999), is the class of Partially Observable Semi-Markov Decision Processes

(POSMDPs). For concreteness, a POSMDP incorporates a time model F , and cumu-

lative reward structure C (see Definition 2.2.1) into a commonly defined POMDP (see

Definition 2.3.1).

A POSMDP can be reduced to its embedded POMDP if it is controlled only at tran-

sition instants (White, 1976), as described in Section 2.2.1 for the SMDP case. This

makes synchronous POSMDPs solvable through ordinary POMDP algorithms (Mahade-

van and Khaleeli, 1999), and implies that all considerations made with respect to partial

observability are equivalent between the two frameworks. However, POSMDPs are also


unable to model event concurrency, making them unsuitable for general multiagent

problems, in their original form.

2.4 Extensions for Multiagent Decision-Making Problems

While the various frameworks so far described already allow a broad class of realistic

problems to be modeled, all of them assume that there is a single agent, or a system

which can be viewed as a single agent, involved in the decision-making process. However,

there is a clear interest in being able to apply the aforementioned theory to decision-

making problems in multiagent systems. This is of particular relevance to the field of

Cooperative Robotics, which provides the main case-studies for this work (presented

in Section 5.3), and in which various practical applications of MDP-based models have

been reported (Capitán et al., 2011; Matarić, 1997; Miller et al., 2009; Oliehoek and

Visser, 2006). Other applications of multiagent decision-making include sensor networks

(Nair et al., 2005), and network routing problems in distributed systems (Pajarinen and

Peltonen, 2011; Tao et al., 2001).

The presence of multiple agents adds another layer of complexity to the decision-making

problem, since, in the general case, agents will then have to reason not only over their

own actions and observations, but also over those of other agents, taking into account

possible interdependencies.

Decentralized POMDPs (Dec-POMDPs) (Section 2.4.1) constitute the most gen-

eral MDP-based framework with respect to the scope of multiagent scenarios it can

model (refer to Figure 2.1). The relationship between the different multiagent mod-

eling frameworks which are here discussed is more easily grasped if each is viewed as

a more specialized instance of the Dec-POMDP framework. Therefore, contrary to

the constructive approach taken in the previous sections, the definition of a general

Dec-POMDP is here presented before the description of its relevant sub-classes.

2.4.1 Decentralized Partially Observable Markov Decision Processes

Dec-POMDPs were introduced in Bernstein et al. (2000) as a straightforward extension

of POMDPs to scenarios where more than one agent may be acting upon the environ-

ment, and observing some of its features with uncertainty.


In Dec-POMDPs and their respective multiagent-oriented subclasses, it is often

necessary to index model components (e.g. actions, observations) with respect to an

individual agent, and also with respect to the corresponding step in the decision-making

process. In the following discussion, the notation zi,n will be used to refer to a generic

element z of agent i at step n. For simplicity, the step index will be omitted if it is clear

or unspecified in a given context.

Definition 2.4.1. A Decentralized Partially Observable Markov Decision Process (Dec-

POMDP) is a tuple 〈d,S,A, T,O, O,R, b0, h〉, where:

d is the number of agents;

$A = \prod_{i=1}^{d} A_i$ is the set of joint actions. $A_i$ contains the individual actions of agent i. Each joint action $a \in A$ is a tuple of individual actions $\langle a_1, a_2, \ldots, a_d\rangle$, where $a_i \in A_i$;

$O = \prod_{i=1}^{d} O_i$ is the set of joint observations o. $O_i$ contains the individual observations of agent i. As for actions, $O := \{\langle o_1, o_2, \ldots, o_d\rangle \mid o_i \in O_i\}$;

S, T, O,R, b0, h have the same definition as in a Partially Observable Markov Decision

Process (see Definition 2.3.1), and are mapped, where appropriate, by joint actions

and joint observations;

Figure 2.8: A 2-TBN representing a two-agent (d = 2) Dec-POMDP.

In this case, the goal of the decision-making problem is to maximize the reward

gathered by the team (since R maps from joint actions and states), so it is assumed that


all agents are cooperating towards a team-driven behavior. However, it is very important

to note that, in a general Dec-POMDP, observations are not necessarily shared amongst

the team: in general, each agent i receives only its respective observation oi (see Figure

2.8). This raises an important issue regarding inter-agent coordination, since agents

will have to take actions which jointly maximize collective reward, based only on locally

available information.

The amount of local information which is available to an agent is one of the fun-

damental features which are characteristic of each sub-class of this framework. As a

relevant example, if each agent is allowed to observe part of its environment, but the

composition of all individual observations allows the joint state of the system to be uni-

vocally identified, such that there is a surjective mapping G : O → S, then the system

is said to be jointly fully observable, also known as a Dec-MDP (Bernstein et al., 2000).

Note that for each agent, the problem may still be partially observable from a local

perspective. The presence of communication constitutes another characteristic feature

which further refines each of these model classes (see Section 2.4.2). However, in the

most general setting, nothing is assumed regarding any of these qualities.

If communication is unavailable, this makes it impossible to maintain a joint be-

lief over the global state of the process, since, through (2.20), the observations of all

agents would have to be shared at each timestep. Without such a sufficient statistic

to exploit the Markovian nature of the process, agents are forced to reason over tem-

poral sequences (histories) of actions and observations, each of which might have led

the system to a particular state. Joint policies, in this context, then become maps of

action-observation histories to actions. Furthermore, since, for each agent, both the

actions and observations of all other agents remain unknown, the number of all possible

such histories increases exponentially with the number of decision steps (in particular,

there are $(|A||O|)^n$ histories at step n).

Although optimal Q-value functions have been shown to exist for general Dec-

POMDPs by Oliehoek et al. (2008a), which allow a particular action-observation his-

tory to be valued with respect to the optimal joint policy, the same authors prove

that obtaining an optimal policy based on such a function through DP, as in the case

of (PO)MDPs, is intractable, except for very small-scale problems. Optimally solving

a non-communicative Dec-POMDP is a provably NEXP-complete problem (Bernstein


et al., 2000). However, optimal algorithms for finite-horizon Dec-POMDPs exist, and

act as a benchmark for small-scale problems (Hansen et al., 2004; Spaan et al., 2011).

In order to circumvent the inherent computational intractability of the optimal so-

lution, common Dec-POMDP algorithms approximate the optimal result either through

point-based DP (Seuken and Zilberstein, 2007), a search in the space of possible joint

policies (Szer and Charpillet, 2005), or by representing the problem as one of Expectation-

Maximization, over the parameters of Finite-State Controllers (FSCs) which encode the

joint policy (Kumar and Zilberstein, 2010; Pajarinen and Peltonen, 2011).

Outside of the MDP framework, Dec-POMDPs can be viewed as a subset of a yet

more general mathematical framework developed in the scope of game theory, that of

Partially Observable Stochastic Games (POSGs)(Oliehoek, 2010). In that case, the

reward structure is generalized so that agents may be pursuing individually-defined

rewards and goals, as opposed to being restricted to cooperative behavior. This allows

adversarial decision-making problems to be modeled, which fall outside of the scope of

this work. However, some state-of-the-art solution methods for general Dec-POMDPs

draw upon game-theoretical concepts, such as representing the decision-making process

at any given time as a one-step Bayesian Game (Oliehoek et al., 2008a).

2.4.2 Modeling Communication

In practical multiagent applications, it is often reasonable to assume that some form of

communication can be established between different agents. This is particularly valid

in the domain of cooperative robotics.

The availability of communication raises three additional decision-making problems:

what to communicate; when to communicate; and who to communicate with. In order

to address these issues, various extensions to the Dec-POMDP framework were proposed, which introduce additional structures to explicitly model communication decisions and costs, such as the Dec-POMDP-Com framework (Goldman and Zilberstein, 2004), and the closely related COM-MTDP framework (Pynadath and Tambe, 2002). However, obtaining a solution to these types of models remains an NEXP-complete problem, since, even

if agents are able to maintain a joint belief state through communication, overcoming

the need to reason over complete histories of actions and observations, the number of

possible communication decisions to be evaluated would once again increase the com-

plexity of deciding the joint policy (now both an action policy and a communication


policy would be required) by an exponential factor. In the worst case, messages can

be too costly for any communication decision to be taken, and so the problem is once

again reduced to a non-communicative scenario. It is also relevant to note that these

classes, even though more convenient, are not more general than Dec-POMDPs in their

descriptive capabilities, since communication actions and costs can be implicitly mod-

eled within the transition, observation and reward models of a given problem (Spaan

et al., 2006).

Goldman and Zilberstein (2004) showed that exchanging observations between agents,

at each time-step, is at least as good as any other inter-agent communication language.

This fact allows problems in which free communication is assumed, and observations are

shared between agents, to be viewed as centralized versions of their respective model

classes. For example, under these circumstances, Dec-MDPs reduce to centralized MDPs

(this had already been presented by Boutilier (1996)), also known as Multiagent MDPs

(MMDPs). In contrast to the NEXP-complete complexity of solving a general Dec-

MDP, the solution to a MMDP a is P-complete problem (Papadimitriou and Tsitsiklis,

1987), since it is the same problem as that of solving a single agent MDP, with as many

individual actions as there are joint actions in the Dec-MDP case.
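The size argument can be made concrete with a toy sketch: under free communication, the action set of the centralized model is simply the Cartesian product of the individual action sets (the action labels below are hypothetical, chosen only to match the robotic soccer setting of Chapter 3):

    from itertools import product

    # Hypothetical individual action sets for a two-robot team:
    A1 = ["dribble", "pass", "shoot"]
    A2 = ["follow", "receive", "block"]

    # Joint action set of the equivalent centralized (MMDP/MPOMDP) model:
    joint_actions = list(product(A1, A2))
    assert len(joint_actions) == len(A1) * len(A2)   # 9 joint actions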

This difference in computational complexity prompts a different approach to the

problem of modelling communication: if the agents are assumed to communicate freely with each other, exchanging observations at each time step, then the centralized model can be solved at a lower computational cost, and communication usage can be minimized a posteriori, by analyzing the resulting joint policy. This approach was

taken by Roth et al. (2007) for Dec-MDPs. However, when dealing with teams of

robotic agents, it is not often reasonable to assume that the state of the system is

jointly-fully observable. It is then more natural to apply this approach to the general

class of Dec-POMDPs, that is, to consider that through free communication, these

models can be regarded as centralized Multiagent POMDPs (MPOMDPs). By doing

so, solving the model becomes a PSPACE-complete problem. This is the approach

taken in Roth et al. (2005b), and also in this work (see Section 3.2.6). The main

disadvantage of using model centralization is that implicit communication is disregarded

(Goldman and Zilberstein, 2004). This is the information which can be extracted from

the dependencies in the actions and observations of different agents. In turn, this

implies that the resulting communication policies may be of lower quality than if an


explicit approach to communication is used instead. Another relevant issue is that fully

centralized models cannot easily describe communication failures or delays. Spaan et al.

(2008) describe a method to reason over stochastically delayed communication (with up

to one step of delay) in this setting, using Bayesian Games to solve the respective

planning problem.

When considering a centralized, team-wise model, it is advantageous to maintain it

in a factored format (discussed in the following section). Otherwise the dimensions of

the model components may grow exponentially in the number of agents.

2.4.3 Factored Models

All MDP-based frameworks described so far suffer from what has come to be known as

the “curse of dimensionality” - that is to say, the required space and time for the solu-

tion of these models grows exponentially with the cardinality of the model components

(Bellman, 1957b). This greatly impacts the scalability of such models, and, as such, has

led to the development of alternative representations which aim to mitigate this effect.

One such alternative is to obtain a factored representation of the models, in which each

of the components of the system may be described as a combination of characteristic

factors, which are typically exponentially smaller in size (Boutilier and Poole, 1996;

Hansen and Feng, 2000; Poupart, 2005). In this sense, the Dec-POMDP framework, as

described in Definition 2.4.1, already possesses naturally factored action and observation

spaces (an agent-wise factorization). A factored Dec-POMDP may further extend this

property to the state of the system, by introducing a set of k state factors $X = \bigcup_{i=1}^{k} X_i$, so that, in turn, $S = \prod_{i=1}^{k} X_i$. Each $X_i$ is arbitrarily sized.
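For illustration, a factored state can be kept as a tuple of factor assignments and converted to a flat joint index only when needed; the factor sizes below are hypothetical:

    import numpy as np

    factor_sizes = (4, 4, 2)     # |X1|, |X2|, |X3| (hypothetical factors)

    x = (2, 1, 0)                                   # one assignment of the three factors
    s = int(np.ravel_multi_index(x, factor_sizes))  # flat index into S = X1 x X2 x X3
    assert tuple(np.unravel_index(s, factor_sizes)) == x
    # |S| = 32 grows multiplicatively, while each individual factor stays small:
    assert int(np.prod(factor_sizes)) == 32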

Although the Dec-POMDP example is here used, factorization can be applied to

any of the previously described frameworks, without any loss of generality. A graph-

ical structure such as the DBN presented in Figure 2.9 accompanies most factored

representations, which allows the conditional dependencies between the various model

components to be made clear. It also enables the compact representation of the tran-

sition and observation models of the system as sets of local Conditional Probability

Tables (CPTs) (Poupart, 2005), or as Algebraic Decision Diagrams (Hoey et al., 1999),

for further savings in the amount of space required to contain this information. Several

of the currently fastest, most scalable solution algorithms for (PO)MDPs make use of

these properties, such as SPUDD (Hoey et al., 1999) and Symbolic Perseus (Poupart, 2005).

Figure 2.9: A 2-TBN representing an example of a factored two-agent Dec-POMDP.

The notions of conditional independence expressed by factored models are of par-

ticular relevance in multiagent scenarios. General Dec-POMDPs do not assume any

particular structure in the interaction between agents, but if a factored representation

is possible, it will often highlight the independence between the effects of the actions

of certain agents, or groups of agents, at a given step. This originates the concept of

locality of interaction in multiagent problems (Oliehoek et al., 2008b), which refers to

the fact that agents will typically influence others which are acting in the same “neigh-

borhood”1 but may act independently from distant or unrelated agents, and so will tend

to form local groups of cooperating agents, according to the structure of a particular

scenario. The reward model can also be factored, in this case, by considering the joint

reward gathered by the team, at a given step, as the summation of local reward terms

associated with different clusters of state factors and agents. This additive property

extends to the value function, and therefore the maximization of each of the involved

terms becomes a simpler problem involving an exponentially smaller number of variables. These considerations may greatly simplify the problem of calculating a joint policy in cases where interactions between agents are sparse.

1 “Neighbouring” agents are commonly thought to be those with concurrent influence arcs in the 2-TBN representing the problem, although this notion can be extended for any number of steps. It often correlates with, but does not necessarily imply, a spatial relationship between agents.

It is important to note, however, that the type of inter-agent (in)dependency so

far described is typically only valid instantaneously, or for very short horizons - as

the system evolves, its factors will rapidly correlate, unless they constitute strictly

independent subprocesses over all steps (Boyen and Koller, 1998). If strict independence

is indeed assumed, then the model can be classified as being Transition-Observation

Independent (TOI). A TOI subclass of Dec-POMDPs is that of Network-Distributed

POMDPs (ND-POMDPs) (Nair et al., 2005). In such a model, the decision problem

is only coupled through the factored reward model (in fact, if the reward model could

also be strictly decoupled across all agents, a TOI scenario could instead be modeled

as separate decision problems from the onset). This assumption allows for greatly

improved scalability, but in practice it often severely limits the modeling capabilities of

the framework. A more general approach is taken by Transition-Decoupled POMDPs

(TD-POMDPs)(Witwicki, 2012), which assumes that the problem can be decoupled for

a particular agent if the policies of other agents are assumed fixed. This allows the

non-concurrent influence of mutually shared state factors by different agents.

Another relevant property of factored models is that, in the general case, the de-

pendencies between the actions of different agents may change depending on certain

variable assignments (here state factors, actions), within a given decision step. If so,

the variables are said to exhibit context-specific independence (Boutilier et al., 1996),

which can be exploited to further reduce the amount of space required for the associated

stochastic models.


Chapter 3

On the Practical Implementation of

MDPs and Related Models

Although MDPs and their generalizations have been shown to form a powerful and

versatile decision-theoretic framework, which can model, and solve, a wide array of AI

applications, the practical implementation of these frameworks in real-world scenarios

is not as widespread as the existing amount of underlying theoretical development could

imply. A common criticism against the use of MDP-based extensions lies in the inherent

computational complexity of obtaining their respective solutions. Although this is not

so significant for classical discrete MDPs, which have been shown to be solvable in

polynomial time (Papadimitriou and Tsitsiklis, 1987), it is particularly valid for their

partially observable and decentralized extensions, which range from being PSPACE-

complete (POMDPs) to being undecidable (infinite-horizon Dec-POMDPs) in terms of

complexity (Bernstein et al., 2000; Papadimitriou and Tsitsiklis, 1987). This ultimately

impairs the scalability of these models, unless significant structure is present, and it can

be exploited through techniques such as factorization (see Section 2.4.3).

In this chapter, a brief review of existing practical implementations of MDP-based

frameworks will be presented in Section 3.1. Section 3.2 introduces a small-scale multia-

gent scenario, which was developed in the scope of this work as a means of exploring the

current practical limitations of POMDPs, for teams of agents in realistic environments.

A review of different existing theoretical approaches that attempt to overcome those

limitations, will be presented in context.


3.1 A Review of MDP-Based Applications

The importance of documenting real-world applications of the MDP framework had

already been recognized as early as the work of White (1985), leading to subsequent

surveys on the topic by that author (White, 1988, 1993). An important distinction is

made in those works, between accounted implementations of MDPs based on real-world

data, which act as a proof-of-concept for the underlying theory, stimulating further

research, but that have gone no further in practice; and those which have provided

decision-making policies that have actually been implemented in industry, finance, or

other activities outside of the research environment, or have at least influenced relevant

decisions made therein. The term “real-world application” is here taken to mean the

latter case. In that sense, and despite an early, apparent lack of applications of the

MDP framework, there was ample evidence of its versatility through the diversity of

areas in which its practical use, over time, had indeed been reported. Notable examples

include:

• The operation of hydroelectric power systems (Lindqvist, 1962; Little, 1955; Wang

and Adams, 1986). The objective of these applications is to control the amount of

water contained in a reservoir, which acts as a source to hydroelectric generators,

such as to optimize the economic efficiency of the plant. The state of the system

represents the level of water in the reservoir, and has an associated (economic)

cost. The inflow to the reservoir is uncertain, since it originates in a natural

water course. This domain of application for MDPs (and closely related stochastic

programming techniques) remains active to this date (Fleten and Kristoffersen,

2008; Pritchard et al., 2005);

• Financial applications, such as managing investments while maintaining an ex-

pected balance, taking into account uncertain expenses and returns (White and

Norman, 1965), or uncertain time to the next investment opportunity (Derman

et al., 1975). The reader is referred to the work of Bäuerle and Rieder (2011) for

recent developments in this field;

• Inventory management (Chao and Manne, 1983; Fabian et al., 1959), a classic

problem in Operations Research. The level of resources in stock is modeled as


(part of) the state of the system, subject to changes through demand, and deci-

sions correspond to restocking operations associated with possibly uncertain costs;

• Maintenance and repair scheduling (Dreyfus, 1957; Duncan and Scholnick, 1973),

where the age and condition of the respective equipments is taken as the state,

subject to random failures, and decisions correspond to repair operations, inspec-

tions, or the purchase of components;

• Queueing management (Ignall and Kolesar, 1974; Low, 1974). In these problems,

the number of elements in a queue, which has an uncertain arrival rate, is typically

modeled as the state of the system. Each element is assigned a utility for the

service, as well as a potential waiting cost. Decisions can consist of accepting or

rejecting new elements, or changing the utility of each element, thereby affecting

the arrival rate. This is a particularly appropriate field for the use of the CTMDP

and SMDP frameworks;

• Agriculture (Kristensen, 1994; Onstad and Rabbinge, 1985), where the population

level of a species can be modeled in the presence of limited resources or epidemic

events. Decisions can include disease treatment options, or the introduction of

new individuals.

The reader is referred to Feinberg and Shwartz (2002); White (1993) for more thor-

ough surveys of real and potential MDP applications.

Another particularly relevant area of real-world probabilistic decision making, in

which there is growing interest in the theory of Markov processes, is that of medical

prognosis and treatment (Ahn and Hornberger, 1996; Hauskrecht and Fraser, 2000;

Shechter et al., 2008). However, this field most often uses Markov models to perform a

posteriori analysis and inference over real data, and there are still few reported imple-

mentations of decision-making policies obtained through the solution of MDPs, as part

of the clinical process (Díez et al., 2011; Schaefer et al., 2005; Sonnenberg and Beck,

1993).

Due to pioneering works such as those of Witten (1977), Sutton (1984), and Watkins

(1989), MDPs became known as a fundamental framework for the study of Reinforce-

ment Learning (RL), triggering rapid development in that sub-field of AI. The increased


availability of computational power, together with the advances in dynamic program-

ming, enabled the development of a wide number of successful game-playing agents, for

scenarios such as backgammon (Tesauro, 1992), Tetris (Tsitsiklis and Van Roy, 1996),

and chess (Thrun, 1995), among others (Kaelbling et al., 1996). MDP-based RL has

also become, since then, an increasingly popular option to address complex robotics ap-

plications (Kober et al., 2008; Mahadevan and Connell, 1992; Mataric, 1994; Ng et al.,

2006; Riedmiller et al., 2009), due to their performance and simplicity of implementation

when compared to model-based approaches.

In many real-world scenarios, the assumption of full observability is often impracti-

cal, and standard MDP models cannot provide accurate representations of the underlying

systems. The POMDP framework provides the necessary tools to model such partially

observable problems, and so extends the domain of practical applications which can

be formally addressed through decision-theoretic approaches. Early documented ap-

plications of POMDPs to real-world settings were mainly related to inspection and

maintenance problems, similar to those described for the MDP case, but in which the

condition of the equipment can only be known indirectly through examination, and so the

true state of the system remains unknown (Eckles, 1968; Ellis et al., 1995; Ohnishi et al.,

1986). Since then, and accompanying the development of this framework, POMDPs

have been shown to be suitable models for a wide range of applications (Cassandra,

1998b), including:

• Population modeling and control, when considering an exact enumeration of in-

dividuals to be infeasible, such as in fisheries (Lane, 1989);

• Corporate decision-making, for problems such as determining whether or not to

perform an internal investigation of a given department, the status of which is

only known indirectly through periodic reports (Kaplan, 1969);

• Financial applications such as dynamic price allocation with unobservable demand

(Aviv and Pazgal, 2005) or stock portfolio management (Wang and Cao, 2011),

considering that stock values are volatile and subject to random changes;

• Spoken dialog management (Williams and Young, 2007; Young et al., 2010), con-

sidering that speech recognition is fallible, and so the “state” of a dialog can only

be observed with uncertainty;


• Medical decision making (Hauskrecht and Fraser, 2000), extending similar MDP

applications to the more realistic setting in which the presence of an underlying

disease can only be tested through investigative actions. Real-world applications

of continuous-time POSMDPs in this domain have been reported by White et al.

(1982);

• Robotics problems, in which POMDPs often provide more realistic models than

their fully observable counterparts, since the sensing capabilities of the agents can

then be taken into account in the decision-making process. POMDPs have been

applied to mobile robot navigation (Koenig and Simmons, 1998; Simmons and

Koenig, 1995; Spaan and Vlassis, 2004), object grasping (Grupen and Coelho,

2002; Hsiao et al., 2007), target tracking, (Hsu et al., 2008), control of unmanned

aerial vehicles (Miller et al., 2009), and diverse other robotics applications (Paquet

et al., 2005; Pineau and Thrun, 2002). The POSMDP framework has also been

briefly studied in these scenarios (Lim et al., 2004; Mahadevan and Khaleeli, 1999);

However, of the reported potential applications of the POMDP framework, only a

small subset is known to have produced results affecting real-world decision making

problems (of the above, Eckles (1968); Ellis et al. (1995); Ohnishi et al. (1986); White

et al. (1982)). It is fair to conclude that in real-world applications, designers tend

to compromise between model realism and computational/representational complexity,

opting for simpler, if inaccurate, models such as MDPs in order to avoid the inherent

intractability of large-scale partially observable problems.

Although RL techniques have also been extended for POMDPs (Jaakkola et al.,

1995; Kaelbling et al., 1996), the necessary increase in the computational complexity

of approximating globally optimal policies through learning (an NP-hard problem) has

stunted the application of these techniques beyond academic examples.

Research on multiagent decision making has been increasingly active over the last

decade (Arai et al., 2002). MDP extensions to decentralized systems have been studied

for various applications, such as multi-robot navigation (Becker et al., 2004), robotic

teamwork in game-playing scenarios (Spaan and Groen, 2003; Wu and Chen, 2008),

search & rescue robotics (Oliehoek and Visser, 2006; Paquet et al., 2005), sensor net-

works (Nair et al., 2005) and distributed systems (Pajarinen and Peltonen, 2011; Tao


et al., 2001). Note that, since the decision theoretic frameworks which are here being

considered assume a cooperative nature between agents, adversarial situations, such as

common financial applications, are not addressed here (these scenarios typically fall within

the domain of stochastic game theory, although some MDP extensions have been shown

to accommodate adversarial representations (Ng et al., 2010)).

Even though there is an active interest in the theoretical development of multi-

agent decision-making through MDPs, the application examples which have been stud-

ied so far are mostly conceptual. There are noteworthy exceptions (Capitán et al.,

2011; Matarić, 1997), in the field of cooperative robotics. However, to date, and to

the author’s knowledge, real-world implementations of decision theoretic frameworks to

multiagent problems, outside of academic scenarios, are still nonexistent. This work sets out to address the limitations of these frameworks which are precluding their practical

implementation, in the domain of cooperative robotics.

3.2 POMDPs for Real Teams of Robots: A Small-Scale

Example

An example of the application of a decision-theoretic framework will be introduced in

this Section, in the context of a cooperative robotics task. This will serve a three-

fold purpose: it will describe the typical sequence of steps involved in modeling a

realistic task under an MDP or a related framework; by doing so, the obstacles to the

implementation of these frameworks in complex scenarios will be made evident; lastly,

it will act as context for a review and description of different techniques which have

been proposed, in the associated literature, to address those difficulties.

The case-study for this example is robotic soccer, a widely known environment

for cooperative robotics (Riedmiller et al., 2009; Spaan and Groen, 2003). The task

(detailed in the following section) is tested in a realistic simulator, taking into account

the physical properties of the agents, and executed as part of a real robotic middleware,

which acts as an interface between the sensors and actuators of the robots, and higher

level modules which will select actions based on a decision-theoretic model.


3.2.1 Description of the Case Study

A common task in robotic soccer is to have various robotic soccer players cooperating in

order to take the ball towards their opponent’s goal. Since this is a conceptual exercise,

the analysis will here be limited to a scenario with two cooperating robots, although the

same considerations could be made for a scenario with a larger team of robots (assuming

that at least one computationally tractable model for the problem exists).

In the two-agent case, one of these players should carry the ball forward, and the

other should position itself so that it may receive a pass from its partner, if necessary.

The robots may choose to pass the ball in order to avoid imminent obstacles, since

it is difficult to avoid obstacles while carrying the ball. The robot that carries the

ball at any given time will be referred to in this case study as the “Attacker”, and its

partner the “Supporter”. Whenever a pass occurs, the roles of the robots should switch,

meaning that an Attacker becomes a Supporter and vice-versa. The Attacker should

then kick the ball to the goal as soon as it detects an opportunity to score. The initial

position of the robots and of the ball in their field of play is unknown, and so are their roles. They should then determine which robot should carry the ball. During the course

of their task, the robots may encounter obstacles that they should be able to avoid,

although the position of these obstacles is not known beforehand. The robots possess

sensors to detect their own location, with uncertainty, the position of the ball, and any

surrounding obstacles.

3.2.2 Identifying an Appropriate DT Framework

Given this description, the first step in modeling this problem is to select the appropri-

ate decision-theoretic framework (from among the several existing alternatives, some of

which have been described in Chapter 2), which can account for its requirements.

The following queries act as a guideline to identify the potential DT frameworks for

a given scenario, according to each of the fundamental qualities of a generic decision-

theoretic model, presented in Figure 2.1:

1. Is the state of the system fully observable?

2. Are there multiple agents involved, and is free communication reasonable?

3. Is the influence of continuous time relevant for the decision-making process?


Among the available frameworks that satisfy these requirements, the basic rationale

for a particular selection is simple: the framework with the lowest associated compu-

tational complexity should be selected, as long as it does not carry unreasonable as-

sumptions which invalidate its implementation in practice. However, what constitutes

an “unreasonable assumption” depends on the level of realism which is expected of the

model. The quality of the resulting solution is ultimately subject to the consideration

of the model designer. This leads to situations where the same problem can be mod-

eled by different frameworks, under different assumptions. One immediate example is

that of medical decision aid: several reported solutions assume the states of the system,

directly related to the health of a patient, to be fully-observable through laboratory

procedures or other forms of analysis (Schaefer et al., 2005); others take into account the

limited reliability of diagnostic procedures, treating the underlying causes of symptoms

as partially-observable (Hauskrecht and Fraser, 2000; White et al., 1982). Evidently,

the difference between these approaches, in terms of quality, lies in how much confidence can be placed in the resulting solution.

Although multiagent scenarios are usually trivial to identify, the availability of com-

munication raises an important problem regarding model tractability: as it was pointed

out in Section 2.4.1, scenarios in which communication policies must be determined

alongside action policies, or in which no communication is possible, are subject to an

exponential increase in computational complexity when trying to obtain an approxi-

mately optimal policy. This problem, in itself, constitutes one of the major obstacles to

the practical implementation of decentralized decision-theoretic models in real scenar-

ios. This issue is here given additional relevance, since it forms one of the topics in

which this work will provide a novel solution.

Remark 3.2.1. Multiagent decision theoretic frameworks which explicitly model com-

munication, or which consider non-communicative agents, are difficult to scale up to

realistically-sized problems.

One must also take into account the amount of information which is available, or

which can be empirically determined, regarding the stochasticity of the environment.

Reinforcement learning solutions are suitable to handle situations in which this infor-

mation is not known reliably, but come at the cost of a potentially lengthier process

until an acceptable solution is found, due to the amount of necessary training.


Applying these aforementioned considerations to the presented robotic soccer case

study, it is clear that the specified task involves multiple agents, in a partially ob-

servable environment (the observations of each agent are uncertain). The selection is

then limited to the decentralized extensions of POMDPs. Of these, the simplest is the

MPOMDP class (which, as was described in 2.4.2, is equivalent to a team-wise single

agent POMDP) although it assumes free communication between agents. Since this is

not a limiting assumption in this particular situation, this will be the chosen framework to

model the problem.

3.2.3 Modeling States, Actions, and Observations

In order to cast the described task as an MPOMDP, further information about the environment is needed to identify a possible set of states for the overall

system. The robots are known to operate on a soccer field, which contains the agents

themselves, the ball, and an unknown number of opponents. Except for these obstacles,

their navigation is free inside the field. The state of the overall system can then be

encoded through the localization information of the robots, the position of the ball, and

the positions of possible obstacles. All of these quantities are continuous, and physically

measurable by a hypothetical external observer.

Modeling inherently continuous physical variables into a discrete structure, as re-

quired by typical (PO)MDPs, is an issue worth careful consideration. Consider the

problem of discretizing the system state: on one hand, the number of states should

be evidently kept to a minimum, since this will reduce the overall complexity of the

problem; on the other, reducing the number of possible states may adversely affect the performance of the agents, since the state of the system needs to be descriptive enough for the resulting policy to make practical sense.
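As a simple illustration of the metric option discussed below, a continuous field position can be mapped to a grid-cell state as follows; the field dimensions and cell size are hypothetical, and are not those of the actual experimental setup:

    FIELD_SIZE = (6.0, 4.0)   # hypothetical field dimensions in metres (x, y)
    CELL = 0.5                # hypothetical cell edge length in metres

    def cell_index(xy):
        """Map a continuous position (x, y) to the index of its grid cell."""
        ncells = (int(FIELD_SIZE[0] / CELL), int(FIELD_SIZE[1] / CELL))
        i = min(max(int(xy[0] // CELL), 0), ncells[0] - 1)
        j = min(max(int(xy[1] // CELL), 0), ncells[1] - 1)
        return i * ncells[1] + j

    # e.g. a 6 m x 4 m field with 0.5 m cells yields 12 * 8 = 96 discrete location states
    assert cell_index((5.9, 3.9)) == 95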

The actions which should be made available to the agents are also dependent upon

the particular form of the state space which is chosen for the problem. Figure 3.1 repre-

sents this tradeoff, by comparing two different approaches to the problem of discretizing

this conceptual robotic soccer environment, with regard to the location of the agents.

Finer, grid-like based discretizations, which can accurately describe the position of an

agent in the physical world (Figure 3.1a) are typically useful for problems of navigation

under uncertainty, and require an appropriate discretization of low-level motion controls

(such as moving in a given direction for some amount of time/space).

Figure 3.1: Two possible state discretizations for a robotic soccer environment: (a) A grid-like metric discretization, typical for navigation problems. The actions of a robot may move it to adjacent cells; (b) A discretization based on logical predicates, useful for high-level decision making. Here, states take a more logical interpretation, such as “being in either side of the field” (states 1 and 2), or “being near the goal” (state 3). Actions may be seen as complex behaviors for a robot. The represented action attempts to move the robot closer to the goal. However, the Markovian property of the system may be broken.

However, in par-

tially observable domains, the fan-out of possible belief states grows exponentially with the problem horizon h (there are $(|A_i||O_i|)^h$ reachable belief states at step h), which can make these approaches quickly intractable. An alternative is to consider a state space

description such as the one depicted in Figure 3.1b, in which states are more closely

related with logical predicates (such as if the robot is near the goal, or in its own side of

the field), but carry less information about the particular configuration of the agent in

physical space. This is useful for high-level decision making, since it allows for descrip-

tions of more complex actions, such as pushing a ball towards the goal, which assume

a greater level of autonomy and knowledge of the environment.

As for observations, similar arguments can be made. For fine, grid-like discretiza-

tions, the model may consider characteristic features of the environment which are

observable in a given cell (such as a detected corner, line, etc.), although the num-

ber of such possible features should be kept to a minimum, which in turn may result

in discarded information. For higher-level descriptions, it may be assumed that other

modules operating in the robot are able to refine sensorial information, into a format

which is then useful to evaluate certain predicates. This may be, for example, a dedicated

localization algorithm, providing an estimated position in real-time, with an associated

belief distribution over the configuration space of the robot, which then propagates into


an uncertainty over the logical values of predicates which require this information.

Given the evident correlation between all model variables and that, typically, none

of them can be particularly justified in an independent manner, it is useful to consider

the problem of modeling the environment as one of selecting the “level” of abstraction

over which the decision-making should be carried out. This, of course, is not only a

problem dependent, subjective decision in itself, but one which is left up to the problem

designer. The designer then requires in-depth knowledge of the system and also of the

selected decision-theoretic framework itself. This may limit the acceptance of MDP-

based frameworks in real-world activities where expert knowledge of these matters is

not readily available.

Remark 3.2.2. The number of states, actions and observations of a discrete POMDP

are free parameters in general robotics applications, and the common criteria for their

selection constitute a trade-off between computational complexity and model realism. The

underlying time between decision episodes, if assumed constant, is arbitrarily defined.

Discrete models constitute potentially lossy approximations of continuous domains.

A popular approach in AI to deal with simultaneous decision-making at various levels

of abstraction is to model the problem hierarchically. This approach has been success-

fully explored both for fully observable MDPs (Parr, 1998; Sutton et al., 1999) and for

POMDPs (Pineau and Thrun, 2002; Theocharous and Mahadevan, 2002; Theocharous

et al., 2005). These approaches typically consider semi-Markov high-level decision the-

oretic models in which the notion of state and action is abstracted, and each such pair

may encompass the execution of a lower-level task, itself modeled as a potentially fully-

Markovian problem. This allows, in some cases, for a representation which is closer

to the decision-making processes carried out in realistic applications, which may, at

different times, fall into different levels of abstraction. It can also provide very ad-

vantageous savings in terms of computational complexity for large-scale problems, and

in terms of the amount of experience required in reinforcement learning applications.

Theocharous and Mahadevan (2002) present a method for reuse learning which allows

solutions for a particular low-level task to be applied in similar domains which share

the same dynamics.

An alternative approach to the modeling of inherently physical problems is to consider continuous models for the environment from the outset, sidestepping the

need to perform any discretization at all. This poses a complex theoretical obstacle,


however, since belief distributions become infinite-dimensional structures. In order to

maintain analytical tractability, a possible solution is to represent underlying belief

distributions in a parametric form, either by assuming system linearity (Porta et al.,

2006), by approximating non-linear models through switching-mode linear components

(Brunskill et al., 2010), or by approximating the true distribution within a family of

known closed-form alternatives (Zhou et al., 2008). Another popular approach is to

use particle-filter techniques to approximate non-parametric distributions (Porta et al.,

2006; Thrun, 2000). It should also be noted that most of these solutions focus on

continuous-state POMDPs, and few can also handle continuous action and observation

spaces (Porta et al., 2006).

Returning to the robotic soccer case study, and recalling that the focus of this example

is on conceptual clarity, a simpler approach is taken regarding environment modeling.

The problem is considered to be one of high-level decision making, in accordance with

the description presented in Section 3.2.1, where it is assumed that the robots are

equipped with basic behaviors, and that the decision-making problem is one of cooper-

atively deciding between them.

Regarding localization, the field of play is discretized into four different sections, similar to the representation of Figure 3.1b. The agent may be located in its own half-field, in the opponent team's half-field, near the opponent team's goal, or in a shooting opportunity, which requires the robot not only to be near the goal while carrying the ball, but also to be turned towards it. The robot may also use localization information to sense if it is ready to receive a pass from its partner.

The information regarding obstacles can be encoded in a logical form, in the sense

that the robot is either blocked by obstacles or free to move in its current direction.

Finally, the robot is also able to detect whether or not it is in possession of the ball.

Note that the robots share their localization information, since they require this information in order to follow each other and to sample their own observations (see Section 3.2.6). This is accomplished through explicit communication,

as described in the following sections. None of the remaining state variables (informa-

tion about the ball and obstacles) is shared, since they are not required by each robot’s

partner. This means that, given this information, the robot is able to estimate (with

uncertainty) its own state, but not its partner's.


According to this description, each agent may then be in one of the thirteen local

states si ∈ Si described in Figure 3.2. However, these local states are not independent across agents, since the problem specification mandates that one of the robots is an Attacker and the other a Supporter. Therefore, the total number of joint states is sixty,

which results from the admissible combinations of local states. It is important to note

that this does not affect the partial observability of the agents. They may still receive

conflicting observations (for example, that they are both Attackers or Supporters).

This description of the state space possesses an evident structure which could be

harnessed by considering a factored representation (Section 2.4.3). Indeed, every level in

the tree of possible states shown in Figure 3.2 could be encoded as a lower-dimensional

state factor. Factored model descriptions are particularly efficient for large-scale or mul-

tiagent problems (Oliehoek et al., 2008b; Poupart, 2005), making them vital concepts

for general realistic applications. In this problem, since the state space is sufficiently

small to be handled by standard POMDP solvers, a flat model is considered for simplic-

ity (i.e. a model where each component is indexed directly, as opposed to a combination

of factors).
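
For illustration, the correspondence between the two kinds of indexing can be written down in a few lines of Python; the factor sizes below are hypothetical placeholders, not the actual dimensions of the soccer model:

    import numpy as np

    # Hypothetical factor sizes, e.g. role, ball possession, coarse location.
    factor_sizes = [2, 2, 4]

    def factored_to_flat(assignment, sizes=factor_sizes):
        # Map a tuple of factor values, e.g. (role, has_ball, location), to a flat state index.
        return int(np.ravel_multi_index(assignment, sizes))

    def flat_to_factored(index, sizes=factor_sizes):
        # Map a flat state index back to its factor assignment.
        return tuple(int(v) for v in np.unravel_index(index, sizes))

    # A flat model enumerates all prod(sizes) = 16 combinations directly.
    assert flat_to_factored(factored_to_flat((1, 0, 3))) == (1, 0, 3)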

Since the complexity of most POMDP algorithms is heavily dependent on the number

of observations, it follows that this set should be kept as small as possible. Also, since the

robots must be able to switch roles, this set is necessarily the same for each agent. The

set of possible observations is described, for each agent, as follows:

• Own Half —Observing the ball in the half-field belonging to the agent’s team;

• Opponent’s Half —Observing the ball in the opposing team’s half-field;

• Near Goal —Observing the ball near the goal of the opposing team;

• Ready —for an Attacker, this signals that the robot is in a shooting opportunity.

For a Supporter, this implies that the robot is able to receive a pass;

• Not Ready —signal received by an Attacker without the ball, or by a Supporter which is not ready to receive a pass;

• Blocked —if the agent is blocked by obstacles.


The preceding signals encapsulate all necessary information about the environment,

resulting in thirty-six possible joint observations. The construction of the associated

observation model may be further simplified by exploiting observation independence

between agents (Section 3.2.4).

Note that the observation set for each agent does not depend on its specific role

as either an Attacker or a Supporter. In some instances (the Ready and Not Ready

signals), it is possible to use the same representations implicitly for both cases, since

each agent will take its own observation into context through the observation function.

These observations are, in themselves, quite abstract, and each of them depends on

possibly more than one source of information. To achieve this level of abstraction in real

robots, it is necessary to implement a set of “virtual sensors”, algorithms that classify

the information collected by the physical sensors of the robot, or that which is provided

by an intermediate post-processing step of that raw data, into one of the observations

defined above. This step is required whenever the dimensionality of raw sensorial data

is larger than the number of desired observations (which is the general case).

Various approaches have been taken in the literature to address this issue. The

simplest is to manually design a set of decision rules that classify raw sensor data into

the desired set of features, as in the work of Nourbakhsh et al. (1995).

A more general approach, taken by Spaan and Vlassis (2004), is to apply a clustering

technique, such as k-means clustering, to a data set (or to a suitable lower-dimensional

projection of it, if the data is too large), producing |O| class representatives. Each new

data point is then classified according to its nearest representative vector.
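
A minimal sketch of such a clustering-based virtual sensor, assuming scikit-learn is available and using randomly generated feature vectors in place of the robots' actual pre-processed sensor data, might look as follows:

    import numpy as np
    from sklearn.cluster import KMeans

    # Offline: learn |O| class representatives from a data set of feature vectors.
    num_observations = 6                      # |O| in the soccer example
    training_data = np.random.rand(1000, 8)   # placeholder for pre-processed sensor data
    kmeans = KMeans(n_clusters=num_observations, n_init=10).fit(training_data)

    def virtual_sensor(feature_vector):
        # Classify a new feature vector as one of the discrete observations,
        # i.e. the index of its nearest class representative (cluster centroid).
        return int(kmeans.predict(feature_vector.reshape(1, -1))[0])

    observation = virtual_sensor(np.random.rand(8))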

Another alternative is to apply neural network training methods to the raw sensor

data, as described by Mahadevan and Khaleeli (1999), or to use closely related multi-

class Support Vector Machine (SVM) techniques (Duda et al., 2001).

The remaining component of this model that must be described is the set of joint

actions, A. As with the observation set, this set is identical for both agents. For this

particular task, the following actions are sufficient:

• Dribbling the ball towards the goal;

• Shooting (or kicking) the ball;

• Passing the ball towards the other robot;


Figure 3.2: Diagram showing the different possible local states the agent may be in. Note

that they are not independent, since for one agent to be the Attacker the other must be

the Supporter and vice-versa.

• Recovering the ball if it becomes lost;

• Following the attacker;

• Finding a position where a pass can be received (finding clearance for the pass).

Logically, the first four actions described in this manner should be performed by the

Attacker robot, while the remaining actions should be taken by the Supporter.

Each of these high-level actions is then interpreted by the middleware of the robotic

agents, and triggers a series of more basic behaviors, which may possess their own local

decision-making loops. When the robot decides to dribble towards the goal, for example,

these lower-level behaviors ensure that the robot is always turned towards the goal, and

supply the robot with the necessary controls so that it may drive the ball and try to

avoid any imminent obstacles.

An issue of particular relevance to the practical implementation of decision policies

obtained through DT techniques, and one which is often overlooked, is the specification

of the domain of execution of each of the modeled actions. Although, as previously

stated, actions correspond to abstractions of state transitions, since state transitions are

not necessarily observed in partially-observable domains, each action must be associated

with an observable condition which signals its termination, and prompts the agent to

perform a new decision. As discussed in Section 2.2, this may lead to the definition

of actions with static or dynamic termination, and merits further discussion. Consider, once more, the type of state space discretizations shown in Figure 3.1. In any of these

cases, if a constant temporal duration is assumed for each action, it becomes clear

that there is a direct dependency between the execution time of these controls and the


resulting state distribution after performing the associated action. The selection of the

appropriate execution time for each action is usually made with respect to the considered

state space, so that there is an appreciable transition probability to other states after

its conclusion. In order to maintain the coherence of the stochastic models associated

with a given problem, the most common approach is to assume that the time which

is required for each state transition to occur, under a particular action, is constant.

At the practical level, this constant time interval is usually generalized for all possible

transitions and actions. In other words, this means that decisions taken at t = 1, . . . , n

are usually assumed to be equally spaced in time (Lindqvist, 1962; Matarić, 1997; Rong

and Pedram, 2003). This is equally useful to maintain synchronous decision-making in

multiagent systems. This is also the approach taken in this case-study.

Note that it would also be possible to specify spatial constraints on the actions

of an agent (i.e. “move forward for two meters”). In fact, any condition which is

directly observable to the agent would suffice. However, temporal constraints are often

preferable since they account for unmodeled failures in the execution of an action, in

the form of a “time-out”, which prevents the agent from becoming indefinitely stuck,

unable to proceed with its decision-making.

With any static termination condition, however, the transition and observation prob-

abilities will not necessarily remain constant over all possible assignments to the un-

derlying continuous variables which surjectively correspond to a single state. This will

tend to break the Markovian property of the system, particularly in higher-level state

descriptions such as that of Figure 3.1b.

Another disadvantage of placing static termination conditions on actions is that it

disregards the reactive nature of robotics environments. If the temporal duration of an

action is too large, then the robot loses its ability to react to sudden changes in its state,

as when it encounters obstacles along its course, for example. Even if the robot is able

to receive an observation consistent with these changes, it may not be enough to alter

the belief of the agent sufficiently for it to perform the desired action (the observation

model has to be narrow in this sense). If the elapsed time between iterations is too

small, then there is not enough time for the robots to transition to another state, and

so the transition function would become flat and uninformative.

Remark 3.2.3. Actions with static termination conditions prevent agents from reacting

to sudden changes in the environment.


A more consistent approach is to place logical, observable conditions on the execution

of each action. For dynamically terminated actions, however, the sojourn time at each

state can become relevant to the decision-making process, turning the problem into an

explicit POSMDP.

3.2.4 Obtaining the Stochastic Models

Given the set of joint observations and the set of joint actions, it is then necessary to

define the transition function T and observation function O. However, in construct-

ing these models, it is advantageous to reduce their size as much as possible, thereby

simplifying model definition, and increasing memory efficiency.

As discussed in Section 2.4.3, agent-wise factorizations of model components are

typically available in situations where the interaction between agents is sparse. Consider

the problem of defining T . Ideally, if the local state of an agent was not influenced by

the actions of the other agent, there would be an independent transition function for

each agent, Ti, such that

    T(s′, s, a) = T1(s′, s, a1) T2(s′, s, a2)        (3.1)

for every s, s′ ∈ S, 〈a1, a2〉 ∈ A. However, from the above description of the action set, it

is evident that for some of the actions (namely, passing and shooting), this assumption

is not valid, since the execution of one of these actions by an agent may induce its

partner to switch its role. Nevertheless, it is valid for all of the remaining actions. This

constitutes a straightforward instance of context-specific independence.

The problem may still be further simplified by noting its symmetry. Since there

is no characteristic feature to distinguish one agent from the other, their transition

functions are identical, T1 = T2. This means that it is only necessary to consider the

effects of the four possible independent actions for each agent (which is a considerable

reduction from the thirty-six possible joint actions). Also, since only half of the states

correspond to a specific agent being Attacker or Supporter, this means that, in matrix

form, the transition function for each of the independent actions is block-diagonal (i.e.

it is impossible to transition from being an Attacker to a Supporter by applying these

actions). For the joint actions that are not conditionally independent, their distribution

over the possible sixty states must be obtained.
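
For the transition-independent actions, the joint model need not be written out by hand. The sketch below (Python) composes a joint transition matrix from two local ones via a Kronecker product; it assumes a generic two-agent problem whose local state spaces are fully independent, which is a simplification of the soccer model above (where the admissible local states are coupled through the roles):

    import numpy as np

    def joint_transition(T1, T2):
        # Compose P(s'|s, <a1,a2>) from local models P(s_i'|s_i, a_i), assuming the two
        # local processes evolve independently under these actions.
        # Ti has shape (|Si|, |Si|): rows indexed by s_i, columns by s_i'.
        return np.kron(T1, T2)   # shape (|S1||S2|, |S1||S2|), joint index s = (s1, s2)

    # Example: two 2-state agents with "stay in place with probability 0.8" dynamics.
    T_local = np.array([[0.8, 0.2],
                        [0.2, 0.8]])
    T_joint = joint_transition(T_local, T_local)
    assert np.allclose(T_joint.sum(axis=1), 1.0)   # each row is a distribution over joint s'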


For the observation model, a similar argument can be made, but in this case the

problem is further simplified by noting the full observation independence in this par-

ticular MPOMDP. At first sight, the passing and kicking actions could be understood

to also influence the observations of the respective partner robot, but this is indeed not

the case, since the observations have been defined independently for each agent in each

state, and the actions taken by the partner robot do not influence the ability of each

agent to perceive its respective information. The joint actions in this task can then be

said to be non-informative, in that they may influence the state of the system, but not

the information collected by the agents. This type of problem can also be appropriately

termed Observation Independent (Goldman and Zilberstein, 2004). In practice, this

means that:

    Pr(o | a, s′) = Pr(o1 | a1, s′) Pr(o2 | a2, s′)        (3.2)
                  = O1(o1, a1, s′) O2(o2, a2, s′) ,   ∀ o ∈ O, a ∈ A, s′ ∈ S        (3.3)

It should also be noted that O1 = O2.

Although the transition and observation models for this particular MPOMDP have

been found to be relatively simple to define in theory, in order to rigorously obtain these

models in practice, it is necessary to estimate the respective probability distributions

either by collecting large amounts of experimental data, or by possessing a priori expert

knowledge of the dynamics of the system. In high-level problems such as the current

case study, the latter option is not viable, since the underlying physical processes of the

problem have been abstracted.

Furthermore, a consequence of having broken, to some extent, the Markovian nature

of the system through arbitrary discretization is that the resulting transition and ob-

servation models carry less useful information regarding actions and observations. That

is to say, when performing an action with a static termination in an abstract state,

the probability of actually moving to another state may drastically change according

to the precise, unmodeled, physical configuration of the robot. The expected value for

this probability, over a number of runs, will likely differ from its true value, result-

ing in less information taken into account during decision-making, and correspondingly

lower-quality solutions.


Also note that all decision-theoretic frameworks presented so far assume that the stochastic models for the problem are stationary (time-independent). Although, from a theoretical standpoint, this is not strictly necessary, dropping this assumption would make the problem of obtaining reliable time-varying stochastic models for large-scale scenarios extremely complex.

An important practical issue regarding solution methods for MDPs and related mod-

els is that, if knowledge of accurate stochastic models for the environment is assumed,

then the sensitivity of the resulting policy to changes in these models is unspecified.

Since the solution can potentially be different, it must be calculated anew if any para-

meters are indeed changed, which may occur, for example, when more reliable data

regarding the system becomes available.

Remark 3.2.4. Decision-theoretic approaches which assume complete knowledge of the

stochastic models of the system are inflexible to change. Any modification to these

models implies a re-calculation of the associated policy.

Note that this, in turn, represents a reason to advocate reinforcement learning ap-

proaches which do not assume any knowledge (or assume imperfect knowledge (Sutton,

1991)) about the environment.

3.2.5 Defining the Reward Model

So far nothing has been said about the reward structure for this particular robotic task.

As has been made clear, any solution method will attempt to maximize some

form of reward. The resulting policy, however, may conflict with the solution which is

expected by the designer, since there is no definitive criterion as to how these rewards

should be assigned. Typically, the designer has limited qualitative knowledge of desired

states, or state-action pairs, and assigns positive or negative rewards accordingly. At a

hypothetical extreme, the designer could in fact encode the optimal policy in the reward

model itself, although, if that knowledge were available, it would remove the need for

any sort of decision-making. In this case-study, the definition of the reward model was

simply to attribute a high reward for kicking the ball in a shooting opportunity, and

to penalize every other step taken. The particular values assigned to those rewards,

however, were seen as arbitrary.
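
Written out, the reward model described above amounts to a few lines; the sketch below (Python) uses the values later reported for the simulations (+150 for a kick from a shooting opportunity, −1 per step otherwise), and the two predicates are hypothetical stand-ins for the corresponding joint-state and action tests:

    # A minimal sketch of the reward model described above (values as in the
    # later simulations: +150 for scoring a goal, -1 for every other step).
    GOAL_REWARD = 150.0
    STEP_PENALTY = -1.0

    def reward(joint_state, joint_action, is_shooting_opportunity, is_kick):
        # R(s, a): high reward only when the Attacker kicks from a shooting opportunity;
        # every other step is mildly penalized. is_shooting_opportunity and is_kick are
        # hypothetical predicates on the joint state and the acting agent's action.
        if is_shooting_opportunity(joint_state) and is_kick(joint_action):
            return GOAL_REWARD
        return STEP_PENALTY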


There is often a process of trial-and-error until a reward model is obtained which

induces a policy with satisfactory practical quality. A rule of thumb is to include as

little information in the reward model as possible. This is valid both for model-based

and reinforcement learning methods. However, model-based solutions suffer from the

same problem of inflexibility with respect to reward as they do with stochastic models.

This, coupled with the fact that the reward model may be re-defined multiple times

during design, results in an overly lengthy planning process.

Remark 3.2.5. Model-based solution methods are inflexible to changes in the reward

structure.

It is here argued that a solution method oriented towards real-world applications

should have some measure of adaptability, similar to the approach proposed by Sutton

(1991), to account for possible changes in the definitions of the various models, without

necessarily inducing a complete re-calculation of the respective solution.

3.2.6 Communication

In order for the problem to be modeled as an MPOMDP, it was assumed that the robots

could communicate their observations freely to each other. However, the optimal actions

of the agents are dependent on the joint belief, which in turn depends on the observations

of both agents. The actions of other agents are not observed, but may be inferred, since

the joint policy is common knowledge. For this information to be coherent, the robots

must keep synchronized when carrying out their policies, i.e. they both must execute

an action at (approximately) the same time. This way, the agents will calculate the

same joint belief at each step.

The synchronization process and necessary explicit communication are performed

according to the diagram in Figure 3.3. Each agent is assumed to perform its own com-

ponent of the maximizing joint action for a fixed time step, τ . The agents’ observations

are only available after the outcome of this action is known. Therefore, after τ has

passed, the agents sample their own observations and exchange them with those of their

partner. If one of the agents is delayed, then its partner will wait for this information

before proceeding. This step is where synchronization is enforced between both robots.

This information is then used to locally calculate the joint belief through (2.20). After


Figure 3.3: Synchronization timeline with instantaneous communication.

the joint belief is obtained, each agent computes a new maximizing joint action and

proceeds to the next step.
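
The per-step protocol of Figure 3.3 can be summarized by the following sketch (Python); the helper callables execute_local_action, sample_local_observation, exchange_observations, belief_update and joint_policy are hypothetical stand-ins for the robots' middleware, the synchronization channel, the belief update rule (2.20) and the computed joint policy:

    import time

    def decision_step(joint_belief, joint_action, tau,
                      execute_local_action, sample_local_observation,
                      exchange_observations, belief_update, joint_policy):
        # One synchronous MPOMDP decision episode, as executed by a single robot.
        # 1. Execute this robot's component of the joint action for a fixed time step.
        execute_local_action(joint_action)
        time.sleep(tau)
        # 2. Sample the local observation and exchange it with the partner (this call
        #    blocks until the partner's observation arrives, which is where
        #    synchronization is enforced).
        own_obs = sample_local_observation()
        joint_obs = exchange_observations(own_obs)
        # 3. Locally update the joint belief with the joint action and joint observation.
        new_belief = belief_update(joint_belief, joint_action, joint_obs)
        # 4. Compute the next maximizing joint action from the joint policy.
        return new_belief, joint_policy(new_belief)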

The imposition of synchronous decision-making is, itself, a potential disadvantage

of this type of approach in practice. For large teams of agents, it might not be feasible

to wait for synchronization, or to assume perfectly reliable communications at each

step. This is often the case in robotic soccer, since the high number of agents quickly

saturates the communication medium. In such a setting, it can be advantageous not to

wait for synchronization between all involved agents. In this sense, the agent would

select an optimal action based on its own local observation, and receive its partners’

data throughout the decision step (Figure 3.4).

Remark 3.2.6. Multiagent, discrete DT frameworks assume decisions are taken syn-

chronously across all agents. The rate at which decisions are taken is a potentially

arbitrary parameter, and deeply affects the stochastic models of the system.

3.2.7 Results

Results of the solution of the aforementioned case study are shown here for demonstration purposes.


Figure 3.4: Synchronization timeline with delayed communication of observations.

The Perseus algorithm (Spaan and Vlassis, 2005) was selected to solve the MPOMDP,

due to its efficiency in handling moderately-sized problems. Perseus belongs to the fam-

ily of point-based POMDP solvers, but it is by no means the only one (Kurniawati et al.,

2008; Pineau et al., 2003). While it is true that often the algorithm to solve a given

POMDP model should be chosen according to the problem’s structure, this does not

create, in this case, a dependency on any particular algorithm.

The Perseus algorithm performed favorably, and converged in as few as 100 it-

erations, as can be seen in Figure 3.5. Naturally, such a value function is a good

approximation of a stationary solution, i.e., a solution that assumes an infinite horizon.

The execution of the task itself was tested in a realistic simulation environment,

equipped with physics modeling. The two agents were placed in arbitrary initial po-

sitions in the field of play, and the ball was initially placed in the center of the field

(which is the origin of the world frame for the robots). The time step within which these

agents should select an action was set to 3 seconds. The transition model was then de-

signed so as to allow a considerable (at least 0.4) chance of performing a transition to

a neighbour state when dribbling the ball (this was done empirically). The observation

model for these robots considered both the possibility of having false positive and false

negative detections of obstacles (0.1 probability of failing to detect an obstacle and 0.05


Figure 3.5: Convergence of the Perseus algorithm for the proposed Dec-POMDP.

probability of detecting a nonexistent obstacle).

Generally, the robots succeeded in their task in an efficient manner, despite the

coarse, high-level modeling approach. In the instantaneous communication case, the

expected reward was approximately 360, with 150 being the immediate reward for scoring

a goal, −1 for all other actions and a discount factor of 0.95.

The effect of receiving delayed information is also investigated. Spaan et al. de-

termined that the solution of a Dec-POMDP in such a setting, with up to one step of

delay in the communication process, can be achieved through a series of Bayesian games

(Spaan et al., 2008). Using a slightly modified version of the Perseus algorithm, results

were obtained for this case. The robots performed less favorably, with approximately 30 expected reward. The comparative results shown in Figure 3.6 were obtained by simulating

both policies for 500 runs. The fact that this reward is positive, however, demonstrates

that even in such a case, the robots are still able to cooperate in order to score goals,

albeit notably less efficiently.

Two different situations are here presented that highlight the performance of the

desired robotic task. These refer to the policy obtained assuming instantaneous com-

munication.

In Figure 3.7, the positions of the robots and of the ball that were recorded from the

simulator in a typical situation are shown, assuming the joint actions start at t1. The

supporter robot (here shown initially through a dashed line) maintains a fixed distance

to the attacker, until at t2 the attacker scores a goal. The ball is reset by the simulator

back to the (0, 0) coordinate. Since the initial supporter is now the closest robot to


[Two histograms: Expected Reward (horizontal axis) vs. Number of Occurrences (vertical axis).]

Figure 3.6: Expected reward collected from simulating the proposed task with instanta-

neous communication (top) and one-step delay (bottom).

the ball, it assumes the role of attacker, and at t3 their roles have been exchanged from

their initial configurations. The process then repeats itself. Note that in this case, there

are no obstacles in the field other than the agents themselves.

A second situation occurs in the presence of obstacles, and is depicted in Figure 3.8.

A barrier of obstacles is placed in front of the initial attacker agent (shown in a dashed

line) and at t2 it selects a pass action since its partner has better clearance to the goal.

Their roles then switch, and the agent shown in blue then carries the ball until it scores

a goal at t3. Note that the initial attacker still decided to carry the ball for a short

amount of time before passing. This is due to the fact that the agent is committed to

performing the latest selected joint action until the predefined time-step expires. This

exemplifies the problem of assuming a constant temporal duration for each action. In

dynamic environments, the robot may not have enough time to select the optimal action

when facing a sudden change in its state.


Figure 3.7: Behavior of the robots after scoring a goal. The positions of the initially

attacking and supporting robots are shown by a solid and a dashed line, respectively. The

position of the ball is marked in gray. The goal is located at (9,0) and is 2m wide. t1, t2,

t3, t4 are sequential time instants.

3.3 Conclusions

This chapter presented a brief review of the application areas of the MDP framework and

its associated extensions. The advantage of DT approaches over most current planning

methods is that they are able to automatically formulate plans which are contingent on stochastic occurrences, taking into account imperfect information. This makes these

frameworks particularly suitable for uncertain domains, as is the case of cooperative

robotics.

With the goal of exploring the potential limitations of these frameworks when applied

to cooperative robotics scenarios, a conceptual, albeit realistic, small-scale task in that

domain was introduced. The steps taken to model that task through an MPOMDP

were described, and results were taken from various simulations. The following topics

were perceived as the main shortcomings of decision-theoretic models when dealing with

cooperative robotics scenarios:

• Multiagent frameworks with explicit communication models, or in which no com-

munication is possible, are not generally applicable to real-world problems due to

their intractability. Approaches that assume free communication are susceptible

to considerable loss of quality in the event of communication delays or failures.

Furthermore, synchronous decision-making is not always desirable.


Figure 3.8: Behavior of the robots when passing to avoid obstacles. The initial attacker,

in this case, is shown by the dashed line. At t2 a pass occurs.

• Actions with static termination conditions disregard the need for reactivity in

dynamically changing environments.

• States, actions, and observations in robotics domains usually possess (at least

some) continuous factors. When modeling these elements in a discrete DT frame-

work, the resolution of that discretization is selected based on computational com-

plexity and model realism. Discrete DT approximations can affect the Markovian

property of the system, they may discard sensorial information, and may lose

reactivity in dynamically changing environments.

• Model-based methods are inflexible to changes in any of their parameters. This

means that a new solution must be calculated in the event that new information

regarding the system is made available, or if the reward structure of the model

is tuned. Solving a large-scale DT problem can be a time-consuming procedure,

and this fact can delay the planning process altogether.

Despite these issues, the simulations that were performed show that the robots are

able to complete the proposed task with the given discrete model, a rough high-level

approximation of the behaviors of the robotic agents. The model itself proved to be

solvable in reasonable time.


The remainder of this work will focus on addressing the issues which were here

identified, with the goal of obtaining a full-scale, real-world application in cooperative

robotics, and providing general guidelines for the implementation of these frameworks

in other realistic scenarios.


Chapter 4

Efficient Communication in

Partially Observable Domains

As previously mentioned, the computational complexity of decision-theoretic

planning in multiagent systems is strongly influenced by the type of communication

which is available in a given scenario. Most non-communicative problems, falling in

the general Dec-POMDP class, are NEXP-hard to solve1 (Pynadath and Tambe, 2002).

Adding costs to communication actions, and requiring an explicit communication policy

to be obtained offline, does not reduce this complexity, since non-communicative policies

must also be evaluated in the worst case. However, if agents are able to share their

observations at each step, without any associated cost, then the complexity of the

multiagent problem is the same as that of a centralized, team-wise single agent model.

In many situations, however, communication is not free. A large team of agents

may need to control their communication usage in order to avoid saturating the com-

munication medium. Communicating may also expend valuable energy in scenarios

where agent autonomy is a priority (such as space exploration). In military applica-

tions, if other entities are attempting to intercept inter-agent communications, then it

also follows that communication should be kept to a minimum.

DT approaches in which communication between agents is possible have already been

explored for non-factored Dec-POMDP models (Pynadath and Tambe, 2002; Roth et al.,

2005a; Spaan et al., 2008; Wu et al., 2009), as well as for factored Dec-MDPs (Roth et al., 2007).

(Footnote 1: The only exceptions are unrealistic scenarios in which the environment is either individually fully observable, or not observable at all, by each agent, and in which communication makes no difference.)


The work described in this chapter will focus on factored MPOMDP models. A

novel method is proposed that exploits sparse dependencies in such a model, in order to

reduce the amount of inter-agent communication. To achieve this, a factored belief state

is considered. Individual policies are developed which map beliefs over state factors to

actions or communication decisions.

Maintaining an exact, factored belief state is typically not possible in cooperative

problems. While bounded approximations are possible for probabilistic inference (Boyen

and Koller, 1998), these results do not carry over directly to decision-making settings

(but see McAllester and Singh (1999)). Intuitively, even a small difference in belief can

lead to a different action being taken. However, when sparse dependencies between the

agents’ decisions exist, the belief over an agent’s local state factors is often sufficient for it to identify the action that it should take, and communication can be avoided. These

notions are formalized as convex optimization problems, extracting those situations

in which communication is superfluous. Experimental results are presented, showing the savings in communication that can be obtained, and the overall impact on decision quality.

This chapter is organized as follows. Sections 4.1, 4.2, 4.3 introduce the relevant

motivation and background for the current work. Section 4.4 presents the formalization

of the proposed method to associate belief points over state factors to actions. Next,

Section 4.5 illustrates the concepts with experimental results, and Section 4.6 provides

conclusions and discusses future work.

4.1 Exploiting Sparse Dependencies in Multiagent POMDPs

In the implementation of Multiagent POMDPs, an important practical issue is raised:

since the joint policy arising from the value function maps joint beliefs to joint actions,

all agents must maintain and update the joint belief equivalently for their decisions to

remain consistent. The amount of communication required to make this possible can

then become problematically large. Here, a fully-communicative team of agents will

be considered, but the necessary amount of communication will be minimized. Even if

agents can communicate with each other freely, they might not need to always do so in

order to act independently, or even cooperatively.

The problem of when and what to communicate has been studied before for Dec-

MDPs (Roth et al., 2007), where factors can be directly observed with no associated


uncertainty, by reasoning over the possible local alternative actions to a particular

assignment of observable state features. For MPOMDPs, this has been approximated at runtime, but implies keeping track of, and reasoning over, a rapidly-growing number of

possible joint belief points (Roth et al., 2005a).

A method will be here described that maps a belief factor (or several factors) directly

to a local action, or to a communication decision, when applicable. The proposed

approach is the first to exploit, offline, the structure of the value function itself in order

to identify regions of belief space where an agent may act independently. This raises the

possibility of developing more flexible forms for joint policies which can be efficiently

decoupled whenever this is advantageous in terms of communication. Furthermore,

since this method runs offline, it is not mutually exclusive with online communication-

reduction techniques: it can be used as a basis for further computations at runtime,

thereby increasing their efficiency.

4.2 Decision-making with Factored Beliefs

Recall from Section 2.3.1 that a joint belief state is a probability distribution over the set

of states S, and encodes all of the information gathered by all agents in the MPOMDP

up to a given step n. A factored state model is considered (see Section 2.4.3) with k

state factors Xi. The joint belief is then:

    b^n(s) = Pr(s^n | o^{n−1}, a^{n−1}, o^{n−2}, a^{n−2}, . . . , o^1, a^1, b^0)
           = Pr(x^n_1, . . . , x^n_k | ·) ,        (4.1)

where x^n_i is the value of X_i at step n. A factored belief state is a representation of this very same joint belief as the product of n_F assumed independent belief states over the state factors X_i, which will be referred to as belief factors:

    b^n = ×_{i=1}^{n_F} b^n_{F_i}        (4.2)

Every factor b^n_{F_i} is defined over a subset F_i ⊆ X of state factors, so that:

    b^n(s) ≃ Pr(F^n_1 | ·) Pr(F^n_2 | ·) · · · Pr(F^n_{n_F} | ·)        (4.3)

with F_i ∩ F_j = ∅ , ∀ i ≠ j. A belief point over factors L which are locally available to the agent will be denoted b_L.


The marginalization of b onto b_F is:

    b^n_F(F^n) = Pr(F^n | a^{1,...,n−1}, o^{1,...,n−1})
               = Σ_{X^n \ F^n} Pr(X^n_1, X^n_2, · · · , X^n_{n_F} | ·)
               = Σ_{X^n \ F^n} b^n(s^n) ,        (4.4)

which can be viewed as a projection of b onto the smaller subspace B_F:

    b_F = M^X_F b        (4.5)

where M^X_F is a matrix where M^X_F(u, v) = 1 if the assignments to all state factors contained in state u ∈ F are the same as in state v ∈ X, and 0 otherwise. This intuitively carries out the marginalization of points in B onto B_F.
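
As an illustration, the projection matrix of (4.5) and the marginalization (4.4) can be computed directly; the sketch below (Python) is a generic construction for a small factored state space, with illustrative names:

    import numpy as np
    from itertools import product

    def marginalization_matrix(factor_sizes, kept):
        # Build M such that b_F = M b, where joint states enumerate all factor
        # assignments (in itertools.product order) and F keeps the factors in `kept`.
        # M[u, v] = 1 iff joint state v agrees with local state u on every kept factor.
        joint_states = list(product(*[range(n) for n in factor_sizes]))
        local_states = list(product(*[range(factor_sizes[i]) for i in kept]))
        M = np.zeros((len(local_states), len(joint_states)))
        for v, s in enumerate(joint_states):
            u = local_states.index(tuple(s[i] for i in kept))
            M[u, v] = 1.0
        return M

    # Example: two binary factors; project a joint belief onto the first factor.
    M = marginalization_matrix([2, 2], kept=[0])
    b = np.array([0.1, 0.2, 0.3, 0.4])     # Pr(X1, X2), flattened in product order
    b_F = M @ b                            # -> [0.3, 0.7] = Pr(X1)
    assert np.allclose(b_F, [0.3, 0.7])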

Note that, as fully described in Boyen and Koller (1998), the factorization (4.3)

typically results in an approximation of the true joint belief, since it is seldom possi-

ble to decouple the dynamics of an MDP into strictly independent subprocesses. The

dependencies between factors, induced by the transition and observation model of the

joint process, quickly develop correlations when the horizon of the decision problem is

increased, even if these dependencies are sparse. Still, it was proven in Boyen and Koller

(1998) that, if some of these dependencies are broken, the resulting error (measured as

the KL-divergence) of the factored belief state, with respect to the true joint belief, is

bounded. Unfortunately, even a small error in the belief state can lead to different ac-

tions being selected, which may significantly affect the decision quality of the multiagent

team in some settings (McAllester and Singh, 1999; Poupart and Boutilier, 2000). How-

ever, in rapidly-mixing processes (i.e., models with transition functions which quickly

propagate uncertainty), the overall negative effect of using this approximation is mini-

mized.

Each belief factor’s dynamics can be described using a two-stage Dynamic Bayesian

Network (DBN). For an agent to maintain, at each time step, a set of belief factors,

it must have access to the state factors contained in a particular time slice of the

respective DBNs. This can be accomplished either through direct observation, when

possible, or by requesting this information from other agents. In the latter case, it

may be necessary to perform additional communication in order to keep belief factors

consistent. The amount of data to be communicated in this case, as well as its frequency,

depends largely on the factorization scheme which is selected for a particular problem.



Figure 4.1: The Relay problem.

The problem of obtaining a suitable partition scheme of the joint belief onto its factors

will not be addressed in this work. Such a partitioning is typically simple to identify

for multi-agent teams which exhibit sparsity of interaction. Instead, focus is given to

the amount of communication which is necessary for the joint decision-making of the

multi-agent team.

4.3 An illustrative example: the Relay problem

Consider the following small-scale factored MPOMDP, called Relay, which will be used

as a conceptual example. In this environment, two agents operate inside a four-state

world, represented in Figure 4.1, in which each agent is confined to a two-state area.

One of the agents possesses a package which it must hand over to the other agent.

The goal of these agents is then to “relay” this package between each other through

the opening between the rooms L1 and R1. The actions that each agent can perform

are to Shuffle, Exchange, or Sense. A Shuffle action moves the agent randomly, and

with equal probability, to either position in its area. The Exchange action attempts to

perform the physical exchange of the package between the agents, and is only successful

if both agents are in the correct position (L1 for the first agent, R1 for the second one)

and if both agents perform this action at the same time. If it succeeds, the world is

reset to a random state with uniform probability. The Sense action is an informative

action, which allows the agent to sense whether it is in front of the opening or not, with

some probability of both false positives and false negatives. The interesting feature of this

small problem is its sparse dependency between the decision processes of these agents.

Evidently, the only cooperative action that the agents may perform is a joint Exchange.

Since this action can only succeed in a particular joint state, it stands to reason that an

agent which is sufficiently certain of not being in its correct, corresponding local state



Figure 4.2: The linear supports of the optimal stationary joint value function for the

Relay problem, colored according to the respective local actions of agent 1.

should always attempt to move there first (via Shuffle). In such a case, this decision can

be taken regardless of the other agent’s state, actions or observations (since the agents

cannot observe each other).
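
To make the example concrete, the Relay dynamics just described can be simulated in a few lines (Python sketch); the Sense error rates used here are placeholder values, since the exact false-positive and false-negative probabilities are not specified in this description:

    import numpy as np

    L1, L2, R1, R2 = 0, 1, 0, 1   # positions within each agent's two-state area

    def relay_step(state, a1, a2, rng=np.random.default_rng()):
        # Sample the next joint state (xL, xR) under joint action (a1, a2).
        # Actions are 'shuffle', 'exchange' or 'sense'; Sense and a failed
        # Exchange do not move the agents.
        xL, xR = state
        if a1 == 'exchange' and a2 == 'exchange' and xL == L1 and xR == R1:
            return (rng.integers(2), rng.integers(2))   # success: uniform reset of the world
        if a1 == 'shuffle':
            xL = rng.integers(2)    # random move within the left area, equal probability
        if a2 == 'shuffle':
            xR = rng.integers(2)
        return (xL, xR)

    def sense(at_opening, p_false_neg=0.1, p_false_pos=0.05, rng=np.random.default_rng()):
        # Noisy "am I in front of the opening?" observation; the error rates are
        # placeholders, since they are not specified in the text.
        if at_opening:
            return rng.random() > p_false_neg
        return rng.random() < p_false_pos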

Figure 4.2 represents the linear supports of the optimal infinite-horizon value func-

tion for this example, and the associated local actions of the first agent. Note that the

joint belief is four-dimensional, but one of these dimensions may be omitted due to the fact that Σ_{s∈S} b(s) = 1. The expected behavior can be inferred from this representation, noting that the Exchange action should only be performed if the agents are certain

of being both in the correct position (bL1,R1 is high), and the Shuffle action should only

be performed if the agent is sufficiently certain of not being in the correct position.

In this scenario, an evident state space factorization is S = XL × XR, where XL = {L1, L2} and XR = {R1, R2}. The projection of these linear supports onto the smaller belief factor bXL is shown in Figure 4.3.

Here, an interesting result is already evident: some regions of this belief space are

covered only by a single linear support. In these situations, there is no ambiguity as to

what agent 1 should do. Its expected behavior is here still contained: if the agent has

low probability of being in room L1, then the optimal action is to Shuffle; if it is not

certain enough of being in L1, it should Sense; and Exchange is always an ambiguous



Figure 4.3: The projections of the linear supports of the joint value function onto the

belief factor over the left room.

action, since it depends on factor XR.

The key idea in the proposed approach is that, in some situations, the local informa-

tion of an agent is enough for it to take locally optimal decisions. If, furthermore, the

belief states over the local state factors are maintained independently, then the agents

might not need to communicate at all between two decisions. The explicit need to com-

municate would only arise in situations where one agent’s optimal action is dependent

upon the other agent’s information. In this example, this corresponds to the case where

one agent is fairly certain of being in the correct place for the exchange. It then needs

to reason over the other agent’s local belief to make sure that an Exchange action is

profitable in terms of expected reward.

4.4 Formal model

The value function, and its associated quantities, are considered to be associated with

a given decision step n, and, for simplicity, this dependency will be omitted. However,

it should be noted that the value function does not need to be stationary – for a finite-

horizon problem, the following methods can simply be applied for every n = 1, . . . , h.

4.4.1 Value Bounds Over Local Belief Space

Recall that, for a given α-vector, Vα(b) = α · b represents the expected reward for

selecting the action associated with α. Ideally, if this quantity could be mapped from

a local belief point bL, then it would be possible to select the best action for an agent

based only on its local information. This is typically not possible since the projection

(4.5) is non-invertible. However, as it will be shown, it is possible to obtain bounds on

the achievable value of any given vector, in local belief space.


The available information regarding V_α(b) in local space can be expressed in the linear forms:

    V_α(b) = α · b
    1_n^T b = 1        (4.6)
    M^X_L b = b_L

where 1_n = [1 1 . . . 1]^T ∈ R^n. Let m be the size of the local belief factor which contains b_L. Reducing this system, it is possible to associate V_α(b) with b and b_L, having at least n − m free variables in the leading row, induced by the locally unavailable dimensions of b. The resulting equation can be rewritten as:

    V_α(b) = β · b + γ · b_L + δ ,        (4.7)

with β ∈ R^n, γ ∈ R^m and δ ∈ R. By maximizing (or minimizing) the terms associated

with the potentially free variables, this form can be used to establish the maximum

(and minimum) value that can be attained at bL.

Theorem 1. Let I_u = {v : M^X_L(u, v) = 1}, β̄ ∈ R^m : β̄_i = max_{j∈I_i} β_j , i = 1, . . . , m, and β̲ ∈ R^m : β̲_i = min_{j∈I_i} β_j , i = 1, . . . , m. The maximum achievable value for a local belief point, b_L, according to α, is:

    V̄_α(b_L) = (β̄ + γ) · b_L + δ .        (4.8)

Analogously, the minimum achievable value is

    V̲_α(b_L) = (β̲ + γ) · b_L + δ .        (4.9)

Proof. First, it shall be established that V̄_α(b_L) is an upper bound on V_α(b). The set I_i contains the indexes of the elements of b which marginalize onto (b_L)_i. From the definition of β̄ it follows that, ∀ b ∈ B:

    Σ_{j∈I_i} β̄_i b_j ≥ Σ_{j∈I_i} β_j b_j ,   i = 1, . . . , m   ⇔
    ⇔   β̄_i (b_L)_i ≥ Σ_{j∈I_i} β_j b_j ,   i = 1, . . . , m ,

where the fact that Σ_{j∈I_i} b_j = (b_L)_i was used. Summing over all i, this implies that β̄ · b_L ≥ β · b. Using (4.7) and (4.8),

    β̄ · b_L + γ · b_L + δ ≥ β · b + γ · b_L + δ   ⇔   V̄_α(b_L) ≥ V_α(b) .

Next, it needs to be shown that ∃ b ∈ B : V̄_α(b_L) = V_α(b). Since 1_n^T b = 1 and b_i ≥ 0 ∀i, β · b is a convex combination of the elements in β. Consequently,

    max_{b∈B} β · b = max_{b∈B} β̄ · M^X_L b = max_i β̄_i .

Therefore, for b_m = argmax_{b∈B} β · b, it results that V̄_α(M^X_L b_m) = V_α(b_m).

The proof for the minimum achievable value V̲_α(b_L) is analogous.

By obtaining the bounds (4.8) and (4.9), a step has been taken towards identifying

the correct action for an agent to take, based on the local information contained in bL.

From their evaluation, the following remarks can be made: if α and α′ are such that V̄_{α′}(b_L) ≤ V̲_α(b_L), then α′ is surely not the maximizing vector at b; if this property holds for all α′ such that (φ(α′))_i ≠ (φ(α))_i, then by following the action associated with α, agent i will accrue at least as much value as with any other vector for all possible b subject to (4.5). That action can be safely selected without needing to communicate.

The complexity of obtaining the local value bounds for a given value function is

basically that of reducing the system (4.6) for each vector. This is typically achieved

through Gaussian Elimination, with an associated complexity of O(n(m + 2)^2) (Fang and Havas, 1997). Note that the dominant term corresponds to the size of the local belief factor, which is usually exponentially smaller than n. This is repeated for all vectors, and if pruning is then done over the resulting set (the respective cost is O(|Γ|^2)), the total complexity is O(|Γ| n(m + 2)^2 + |Γ|^2). The pruning process used here is the same

as what is typically done by POMDP solvers (White, 1991).
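
For intuition, the same kind of bounds can also be evaluated directly, without going through the reduction of (4.6): the maximum (respectively minimum) of α · b over all joint beliefs consistent with a local belief point is obtained by assigning each local state's probability mass to its best (respectively worst) compatible joint state. The following Python sketch takes that direct route; it is meant only to illustrate the quantities involved (names are illustrative), not to replace the reduction-based procedure described above:

    import numpy as np

    def local_value_bounds(alpha, M, b_L):
        # Upper and lower bounds on V_alpha(b) = alpha . b over all joint beliefs b
        # that marginalize onto the local belief b_L (i.e. M b = b_L, b >= 0).
        # Each local state's mass goes to its best (resp. worst) compatible joint state.
        m = M.shape[0]
        alpha_up = np.array([np.max(alpha[M[i] > 0]) for i in range(m)])
        alpha_lo = np.array([np.min(alpha[M[i] > 0]) for i in range(m)])
        return float(alpha_up @ b_L), float(alpha_lo @ b_L)

    # Example with two binary factors, keeping the first one local (M as in (4.5)).
    M = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
    alpha = np.array([1.0, 3.0, -2.0, 0.5])
    upper, lower = local_value_bounds(alpha, M, b_L=np.array([0.6, 0.4]))
    # upper = 0.6*3.0 + 0.4*0.5 = 2.0 ; lower = 0.6*1.0 + 0.4*(-2.0) = -0.2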

4.4.2 Dealing With Locally Ambiguous Actions

The definition of the value bounds (4.8) and (4.9) only allows an agent to act in atypical

situations in which an action is clearly dominant in terms of expected value. However,

this is often not the case, particularly when considering a large decision horizon, since

the present effects of any given action on the overall expected reward are typically

not pronounced enough for these considerations to be practical. In a situation where

multiple value bounds are conflicting (i.e. V̄_α(b_L) > V̲_{α′}(b_L) and V̲_α(b_L) < V̄_{α′}(b_L)), an

agent is forced to further reason about which of those actions is best.

In order to tackle this problem, assume that two actions a and a′ have conflicting

bounds at bL. Given Γa = {α ∈ Γ : (φ(α))i = a} and similarly defined Γa′ , the matrices


A = [Γa_i]_{k×n}, i = 1, . . . , |Γa|, and A′ = [Γa′_i]_{k′×n}, i = 1, . . . , |Γa′|, are defined. Then, the vectors v = Ab and v′ = A′b (in R^k and R^{k′}, respectively) contain all possible values attainable at b through the vectors in Γa and Γa′. Naturally, the maximum of these

values for each action will be sought. In particular, the goal is to determine if max_i v_i is greater than max_j v′_j for all possible b such that b_L = M^X_L b. If this is the case, then

a should be selected as the best action, since it is guaranteed to provide a higher value

at bL than a′.

The problem is then to find some b where a′ is the maximal action. It can be

described as the constrained optimization:

    minimize    max_i v_i − max_j v′_j
    subject to  v = A b ,      b ⪰ 0_n
                v′ = A′ b ,    1_n^T b = 1
                M^X_L b = b_L        (4.10)

If the solution to this problem is negative, it follows that a′ is maximal at some point

b, which means that neither action can be taken without further information. Unfor-

tunately, the target function in this optimization is non-convex. Taking the epigraph of

the first term of the target function, the problem becomes:

    minimize    s − max_j v′_j
    subject to  A b ⪯ 1_k s ,   b ⪰ 0_n
                v′ = A′ b ,     1_n^T b = 1
                M^X_L b = b_L        (4.11)

If the vectors in Γa′ (rows of A′) are then taken individually, the problem trivially

becomes the LP:

    ∀ i = 1, . . . , |Γa′| :
        maximize    Γa′_i · b − s
        subject to  A b ⪯ 1_k s ,     b ⪰ 0_n
                    M^X_L b = b_L ,   1_n^T b = 1        (4.12)

If the solution b_opt to each of these LPs is such that max_i (A b_opt)_i ≥ max_j (A′ b_opt)_j,

then action a can be safely selected based on bL. If this is not the case for any of the

solutions, then it is not possible to map the agent’s best action solely through bL. In


order to disambiguate every possible action, this optimization needs to be carried out

for all conflicting pairs of actions.
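
A possible implementation of this sequence of LPs, using scipy.optimize.linprog and the matrices defined above, is sketched below (Python); the function name and the conservative handling of solver failures are choices of this sketch, not part of the original formulation:

    import numpy as np
    from scipy.optimize import linprog

    def action_decidable(A, A_prime, M, b_L):
        # Check, via the LPs in (4.12), whether action a (vectors stacked in A) can be
        # selected from the local belief b_L alone, against action a' (vectors in A_prime).
        # A is k x n, A_prime is k' x n, M is the marginalization matrix of (4.5).
        k, n = A.shape
        A_ub = np.hstack([A, -np.ones((k, 1))])            # A b <= 1_k s
        b_ub = np.zeros(k)
        A_eq = np.vstack([np.hstack([M, np.zeros((M.shape[0], 1))]),
                          np.append(np.ones(n), 0.0)[None, :]])
        b_eq = np.append(b_L, 1.0)                          # M b = b_L , 1^T b = 1
        bounds = [(0, None)] * n + [(None, None)]           # b >= 0, s free
        for gamma in A_prime:                               # one LP per vector of Gamma_a'
            c = np.append(-gamma, 1.0)                      # maximize gamma . b - s
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            if not res.success:
                return False                                 # be conservative: communicate
            b_opt = res.x[:n]
            if np.max(A @ b_opt) < np.max(A_prime @ b_opt):  # a' can be maximal at some b
                return False
        return True                                          # a dominates a' for every compatible b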

An alternative is to introduce the slack variable ξ in the constraints of (4.11):

    A b ⪯ 1_k s ,          b ⪰ 0_n
    A′ b = 1_{k′} s + ξ ,  1_n^T b = 1
    M^X_L b = b_L        (4.13)

If the maximum element of ξ is positive at some b, then it can safely be concluded that max_i v_i ≤ max_j v′_j, and therefore the actions are undecidable. The problem of maximiz-

ing the maximum element of ξ, however, is only solvable by splitting ξ into its positive

and negative components, ξ+ and ξ−, and requiring that (ξ+)T · ξ− = 0. The latter

constraint is itself non-convex, and at best it increases the complexity of the optimiza-

tion procedure beyond that of the exhaustive LP (4.12). The full optimization problem

which is here considered is reducible to that of L∞ error minimization, for which there

is no known method (Patrascu et al., 2002). In order to still pose this problem as an LP, these constraints can be relaxed, and the problem instead described as:

\[
\begin{aligned}
\text{maximize} \quad & \mathbf{1}_{k'}^T \xi \\
\text{subject to} \quad & Ab \preceq \mathbf{1}_k s, \quad b \succeq 0_n, \\
& A'b = \mathbf{1}_{k'} s + \xi, \quad \mathbf{1}_n^T b = 1, \\
& M^{X_L} b = b_L
\end{aligned}
\tag{4.14}
\]

The target function in this optimization is not the same as in the original problem

(4.10), since it instead seeks to find the point b with the highest average difference

between the maximum element of v and the values of A′ (highest mean value of ξ).

While the optimal solution to this problem is typically achieved at a point where ξ has

positive components, this is not necessarily so, and therefore this must be considered

as an approximate solution to the original problem. Since the vectors in A and A′ are

arbitrary as long as the full value function is convex, it is also difficult to establish a

bound on the quality of this approximation. In practice, for the examples which are

here studied, it was found that using (4.14) instead of (4.12) does not noticeably affect

the quality of the resulting communication map, and allows better scalability to larger

domains.
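The relaxed LP (4.14) can be set up in the same way as the previous sketch; the illustrative snippet below makes the same assumptions (alpha-vector matrices A and A′, local projection M_L), with the variable vector laid out as [b, s, ξ].

```python
# Sketch of the relaxed LP (4.14). A positive component of xi at the optimum
# indicates a joint belief consistent with b_L where some vector of a' rises above
# the best value of a, i.e. the pair of actions remains ambiguous at b_L.
import numpy as np
from scipy.optimize import linprog

def relaxed_conflict_check(A, A_prime, M_L, b_L):
    k, n = A.shape
    k_p, _ = A_prime.shape
    c = np.concatenate([np.zeros(n + 1), -np.ones(k_p)])         # maximize 1^T xi
    A_ub = np.hstack([A, -np.ones((k, 1)), np.zeros((k, k_p))])  # A b <= 1_k s
    b_ub = np.zeros(k)
    A_eq = np.vstack([
        np.hstack([A_prime, -np.ones((k_p, 1)), -np.eye(k_p)]),  # A' b = 1_k' s + xi
        np.hstack([M_L, np.zeros((M_L.shape[0], 1 + k_p))]),     # M_L b = b_L
        np.hstack([np.ones((1, n)), np.zeros((1, 1 + k_p))]),    # 1^T b = 1
    ])
    b_eq = np.concatenate([np.zeros(k_p), np.asarray(b_L), [1.0]])
    bounds = [(0, None)] * n + [(None, None)] * (1 + k_p)        # b >= 0; s, xi free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if not res.success:
        return False                     # no feasible joint belief found for this b_L
    xi = res.x[n + 1:]
    return bool(np.max(xi) > 0)          # True: the actions remain conflicting at b_L
```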


4.4.3 Mapping Local Belief Points to Communication Decisions

For an environment with only two belief factors, the method described so far could

already incorporate an explicit communication policy: given the local belief bL of an

agent, if it is possible to unequivocally identify any action as being maximal, then

that action can be safely executed without any loss of expected value. Otherwise, the

remaining belief factor should be requested from other agents, in order to reconstruct b

through (4.3), and map that agent’s action through the joint policy. However, in most

scenarios, it is not sufficient to know whether or not to communicate: equally important

are the issues of what to communicate, and with whom.

Consider now the general problem with nF belief factors contained in the set F .

In this case there are 2^{|F|−1} combinations of non-local factors which the agent can

request. Our goal is to identify one such combination which contains enough information

to disambiguate the agent’s actions. Central to this process is the ability to quickly

determine, for a given set of belief factors G ⊆ F , if there are no points in bG with

non-decidable actions. The exact solution to this problem would require, in the worst

case, the solution of |Γa| × |Γa′ | LPs of the form (4.12) for every pair of actions with

conflicting value bounds. However, a modification of the approximate LP (4.14) allows

this problem to be tackled efficiently:

\[
\begin{aligned}
\text{maximize} \quad & \mathbf{1}_{k'}^T \xi' + \mathbf{1}_k^T \xi \\
\text{subject to} \quad & Ab \preceq \mathbf{1}_k s, \quad A'b = \mathbf{1}_{k'} s + \xi, \quad M^{X_L} b = b_L, \\
& A'b' \preceq \mathbf{1}_{k'} s', \quad Ab' = \mathbf{1}_k s' + \xi', \quad M^{X_L} b' = b_L, \\
& b \succeq 0_n, \quad b' \succeq 0_n, \quad M^{X_G} b = M^{X_G} b'
\end{aligned}
\tag{4.15}
\]

The rationale behind this formulation is that any solution to the LP in which max_i ξ_i >
0 and max_j ξ′_j > 0 simultaneously identifies two different points b and b′ which map to
the same point bG in G, but have different maximizing actions a′ and a, respectively.

This implies that, in order to select an action unambiguously from the belief over G, no

such solution may be possible.

Equipped with this result, it is now possible to formulate a general procedure that, for

a set of belief points in local space, returns the corresponding belief factors which must

be communicated in order for an agent to act unambiguously. This is referred to as

obtaining the communication map for the problem. This procedure is as follows: begin

by computing the value bounds of V over local factors L, and sampling N reachable


local belief points bL; for each of these points, if the value bounds of the best action are

not conflicting (see Section 4.4.1), or any conflicting bounds are resolved by LP (4.14),

mark bL as safe, add it to the communication map, and continue on to the next point;

otherwise, using LP (4.15), search for the minimum set of non-local factors G which

resolves all conflicts; then associate bL with G and add it to the map.

Below is a more detailed, pseudo-code description of this algorithm. The inputs to

the algorithm are the set of local factors, L, the full set of belief factors, F, the value

function V , and the number of desired samples N . The output is a set of pairs 〈bL,G〉

of local belief points and associated communication decisions.

Algorithm 1 CreateCommunicationMap(L, F, V, N)
 1: {Single_LP(bL, a, a′) refers to (4.14)}
 2: {Full_LP(factors, bL, a′) refers to (4.15)}
 3: Samples ← sample N reachable local belief points bL
 4: bounds ← obtain local value bounds of V;  Map ← ∅
 5: for all bL ∈ Samples do
 6:     α′ ← argmax_α Vα(bL)
 7:     if Vα′(bL) ≥ Vα(bL) ∀α ≠ α′ or Single_LP(bL, a, a′) > 0 ∀α ≠ α′ then
 8:         Map ← Map ∪ 〈bL, ∅〉
 9:     else
10:         G ← ∅;  H ← F \ L
11:         while H is not empty do
12:             temp ← remove factor from H;  factors ← H ∪ G
13:             if Full_LP(factors, bL, a′) returns both negative solutions then
14:                 G ← temp
15:             end if
16:         end while
17:         Map ← Map ∪ 〈bL, G〉
18:     end if
19: end for
20: return Map

During execution, an agent updates its local information bL, finds the nearest neigh-

bor point in the communication map, and requests the corresponding factors from the

other agents. The agent then selects the action which exhibits the highest maximum

value bound given the resulting information.
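A minimal sketch of this execution-time procedure is given below. It assumes the communication map is stored as a list of 〈bL, G〉 pairs, and uses placeholder callables (request_factors, value_upper_bound) for the communication layer and for the value-bound computation of Section 4.4.1; these names are illustrative and not part of the formal model.

```python
import numpy as np

def select_action(b_local, comm_map, actions, request_factors, value_upper_bound):
    """comm_map: list of (sampled local belief, factor subset) pairs from Algorithm 1.
    request_factors(G) is assumed to return a dict with the beliefs over the requested
    factors; value_upper_bound(belief, a) evaluates the maximum value bound of a."""
    # 1. Nearest-neighbour lookup of the current local belief in the map.
    points = np.array([p for p, _ in comm_map])
    nearest = int(np.argmin(np.linalg.norm(points - np.asarray(b_local), axis=1)))
    factors_needed = comm_map[nearest][1]

    # 2. Request the associated non-local factors; an empty set means the point is safe.
    belief = {"local": np.asarray(b_local)}
    if factors_needed:
        belief.update(request_factors(factors_needed))

    # 3. Select the action with the highest maximum value bound given this information.
    return max(actions, key=lambda a: value_upper_bound(belief, a))
```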


Figure 4.4: (a) Layout of the Relay-Small problem. (b) Layout of the Relay-Large problem. (c) Communication map for the Relay-Small problem.

4.5 Experiments

An analysis is now given of the results of applying the aforementioned offline communi-
cation mapping process to three different MPOMDP environments, each with a different
degree of interdependency between agents. The first and smallest of the test problems

is the Relay problem introduced in Section 4.3, and is mainly used for explanatory

purposes. The fact that, in this problem, each belief factor is two-dimensional (each

factor spans one of the rooms) allows the visualization of the results of the proposed

method. In Figure 4.5, it can be seen that some of the agent’s expected behavior is

already contained in the value bounds over its local factor: if an agent is certain of

being in room R1 (i.e. (bX1)1 = 0), then the action with the highest-valued bound is

Shuffle. Likewise, an Exchange should only be carried out when the agent is certain

of being in L1, but it is an ambiguous action since the agent needs to be sure that its

teammate can cooperate. Figure 4.4c represents the communication map which was

obtained offline through the proposed algorithm. Since there are only two factors, the

agent only needs to make a binary decision of whether or not to communicate for a

given local belief point. The belief points considered safe are marked as 0, and those

associated with a communication decision are marked as 1. In terms of quantitative

results, it can be seen that ∼ 30 − 40% of communication episodes are avoided in this

simple example, without a significant loss of collected reward.

Another test scenario is the OneDoor environment of Oliehoek et al. (2007). In this

problem, two agents operate in a 49-state grid-like world, represented in Figure 4.6, and
may each be in one of 7 possible positions. One of the agents is known to be in positions

1, 2 or 3 (with uniform probability) and has the goal of reaching position 5. The other

starts in positions 5, 6 or 7 and must reach position 3. Each agent can move in any of the


Figure 4.5: Value bounds for the Relay-Small problem (left: value bounds; right: pruned value bounds), plotted against (b_{X1})_1. The dashed lines indicate the minimum value bounds, and the solid lines represent the maximum value bounds, for each action (Shuffle, Exchange, Sense).

four directions, with an associated probability of ending up in an unintended neighbor

state, and can observe positions 2, 4 and 6 with no noise. The remaining positions

all produce the same observation, albeit also deterministically. Therefore |Oi| = 4.

The robots may share the same position, and they receive a penalty for being both in

position 4 at the same time. They receive a positive reward for reaching their goal, and

no reward otherwise. The agents are uncoupled except through the reward function

(i.e. this is a transition-observation independent version of the problem). Even so, this

means that an acceptable policy in this problem must be such that one of the agents

waits for the other to clear the “door” in position 4 before it attempts to move there.

If a sufficiently large horizon is considered, this problem allows for a significant re-

duction in communication, using the proposed method. This is because a near-optimal

joint policy defines which agent should take priority, and since that agent always moves
first, it rarely needs to communicate (only when the other agent has a sufficient probability of moving to position 4 due to the noise in its actions).

        Relay-Small                 OneDoor                    Relay-Large
h       Full Comm.   Red. Comm.     Full Comm.   Red. Comm.    Full Comm.    Red. Comm.
6       15.4, 100%   14.8, 56.9%    0.35, 100%   0.30, 89.0%   27.4, 100%    25.8, 44.1%
10      39.8, 100%   38.7, 68.2%    1.47, 100%   1.38, 76.2%   -19.7, 100%   -21.6, 62.5%
∞       77.5, 100%   73.9, 46.1%    2.31, 100%   2.02, 61.3%   134.0, 100%   129.7, 58.9%

Table 4.1: Results of the proposed method for various environments. For settings assuming full and reduced communication, each entry shows empirical control quality and online communication usage.


            Relay-Small          OneDoor              Relay-Large
h           6     10    ∞        6     10    ∞        6      10     ∞
Perseus     1.1   4.3   0.1      7.3   33.3  5.3      239.5  643.0  31.5
Comm. Map   5.9   21.4  7.4      12.4  57.7  5.9      368.7  859.5  138.1

Table 4.2: Running time (in seconds) of the proposed method in comparison to the Perseus point-based POMDP solver.

Figure 4.6: Representation of the OneDoor scenario (rooms A–D, positions 1–7).

The other agent, in turn,

must communicate until its partner clears the door, and afterwards, its local actions

can also be taken independently and so it ceases communication. For horizons smaller

than 10, however, the agents may not have enough decisions left to gather any posi-

tive reward, and in these cases they both communicate in order to avoid any possible

collisions.

Note that this relationship between the problem’s horizon and the amount of com-

munication savings does not hold for all of the problems. The proposed method exploits

the invariance of local policies over subsets of the joint belief space, and this may arbi-

trarily change with the problem’s horizon.

A larger example is displayed in Figure 4.4b. This is an adaptation of the Relay

problem (aptly named Relay-Large) to a setting in which each room has four different

states, and each agent may be carrying a package at a given time. Agent D1 may

retrieve new packages from position L1, and D2 can deliver them to L2, receiving for

that a positive reward. There are a total of 64 possible states for the environment. Here,

since the agents can act independently for a longer time, the communication savings

are more pronounced, as shown in Table 4.1.

Finally, it is here argued that the running time of the proposed algorithm is com-

parable to that of general POMDP solvers for these same environments. Even though

both the solver and the mapper algorithms must be executed in sequence, the results


in Table 4.2 show that they are typically both in the same order of magnitude.

4.6 Conclusions

Traditional multiagent planning in partially observable environments mostly deals with

fully-communicative or non-communicative situations. For a more realistic scenario

where communication should be used only when necessary, state-of-the-art methods are

only capable of approximating the optimal policy at run-time (Roth et al., 2005a; Wu

et al., 2009). The properties of MPOMDP models which can be exploited in order

to increase the efficiency of communication between agents were here analyzed. It was

shown that these properties hold, for various MPOMDP scenarios, and that the decision

quality can be maintained while significantly reducing the amount of communication,

as long as the dependencies within the model are sparse.


Chapter 5

Future Work

This chapter describes proposed research topics to address the remaining open problems

which hinder the application of DT frameworks to realistic decision-making problems.

In summary, these constitute:

• A method to allow modeling of high-level decision making problems over continu-

ous environments, without sacrificing model accuracy, and retaining the possibility

of reacting to sudden changes;

• An algorithm to adapt POMDP models and their respective solutions, at run-

time, to facilitate their implementation in long-term applications without requir-

ing expert intervention, and to reduce the amount of prior information regarding the
stochastic properties of the environment that must be specified by the problem designer.

The scenarios which will be used as experimental environments for the method pro-

posed in this work will be described. These include decision-making for an autonomous

robotic soccer team, which is part of a well-known testbed for cooperative robotics;

and an automated, multiagent surveillance system, to be later deployed in real-world

operation.

Lastly, a schedule for the proposed research activities is presented, along with closing

remarks for this work.

5.1 Event-Driven POMDPs

As discussed in Section 3.2, when continuous-state environments, as are typical of
robotics tasks, are approximated through discrete models, the quality of the


Figure 5.1: A robotic navigation task in which actions are assumed to have static termination conditions, and a metric grid discretization is used for the state space. The amount of time or distance which is assumed for each action and the resolution of the state space jointly affect the quality of the resulting policy and the computational complexity of its calculation. Left: A discretization with |S| = 13 and |π∗| = 8; Right: A discretization with |S| = 43 and |π∗| = 12.

approximation and the computational complexity of the model depend on the level of

abstraction which is assumed by the problem designer. Often, the same task can be

modeled in different ways, and the deciding factor to the scale of the model is the

available amount of computational power. Another example of this problem is shown

in Figure 5.1 for a conceptual path-planning task over a continuous domain.

From a decision-theoretic perspective, coarser models are more efficient (since they

have fewer states and shorter horizons are required for good policies to be obtained),

but this comes at the cost of undesirable constraints on real-time applications. One of

these constraints is that, when using static termination conditions for an action, that
action becomes uninterruptible, and therefore the system loses reactivity in higher-level

state/action descriptions, since an agent must wait until those conditions expire before

performing another decision. The other important limitation, inherent to discrete MDPs

and discussed in Section 2.2, is that it is assumed that the time that elapses while the

system remains in a given state does not influence the probability of moving to any

other state. This may affect the Markovian nature of the system.

On the other hand, finer models such as that of Figure 5.1b may provide policies which

describe repetitive actions, and which may seem far removed from the actual decision-

making aspect of the task. When considering even the simple task of moving through

an unoccupied room to a target position, it seems wasteful for an agent to repeatedly
decide to move in the same direction after a fixed amount of space or time. Indeed,

much of the computational time of a planning algorithm can be spent on seemingly


Figure 5.2: Modeling the navigation task (left) as a Discrete-Event System (Finite State Automaton, right). In the latter case, nodes correspond to the labeled areas of the environment (states), and arcs represent the possible transitions between states.

irrelevant decisions.

Hierarchical models (Oliehoek and Visser, 2006; Pineau and Thrun, 2002; Sutton

et al., 1999; Theocharous et al., 2005) alleviate this problem by considering explicitly

different planning problems at different levels of abstraction. This generally requires

taking into account the possible semi-Markovian nature of the process at higher levels.

However, previous approaches have focused on hierarchical structures in which the most

primitive components are still discrete. This implies that, at this level, the system will

still not be fully reactive, and that more efficient approaches may exist that directly

operate on continuous domains.

The main rationale of this proposed research topic is to make use of extensions to the

MDP framework that explicitly take into account the effect of continuous time (SMDPs,

POSMDPs, and related models), in order to obtain high-level decision theoretic models

which preserve the desired reactive behavior of a robotic task.

Figure 5.2 shows how the earlier path planning task could be modeled through a

high-level discrete event system (a Finite State Automaton). If each of the transitions

represented in Figure 5.2b is allowed to have a characteristic probability distribution over

the instant of its occurrence, this would allow a representation of the model through

an equivalent continuous-time SMDP. Furthermore, if observations are associated with

state transitions, and partial observability of each transition is considered, the system

could still be contained in a POSMDP model. Since each transition here corresponds to

a system-wide “event”, this type of formulation may be seen as an Event-Driven POMDP,

a concept already explored, to a limited depth, in the literature (Rachelson et al., 2008;

Wang and Cao, 2011).
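Purely as an illustration of this representation, the arcs of the automaton in Figure 5.2b could be annotated as in the sketch below; the state names, events, and numerical values are invented for the example and are not part of an actual model used in this work.

```python
from dataclasses import dataclass

@dataclass
class Event:
    successor: str        # high-level state reached when the event fires
    probability: float    # probability of this event for the given state-action pair
    time_dist: tuple      # parametric sojourn-time model, e.g. ("gamma", shape, scale)

# Hypothetical annotation of two arcs leaving room R1 under the action "go to door D1".
events = {
    ("R1", "goto_D1"): [
        Event(successor="R2", probability=0.9, time_dist=("gamma", 4.0, 1.5)),
        Event(successor="R1", probability=0.1, time_dist=("gamma", 2.0, 1.0)),
    ],
}
```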


Typical hierarchical models consider the observations of the system to be generated

by the lower levels of abstraction, and high-level observations are taken as strings or

combinations of atomic low-level observations. This implies that transitions between

macro-states cannot be directly observed (Theocharous et al., 2005). In this sense,

an Event-Driven POMDP has the advantage of associating transitions between high-

level states with (possibly partially) observable events. Event detectors can be easily

implemented in a robot, given their predicate-based nature.

The main difficulty with this approach is the estimation of the time model, F (refer

to Section 2.2.1), in realistic domains. If the Markov property of the system is to be

preserved, then the model must reflect the fact that the physical configuration of the

agent may considerably affect the time to a high-level state transition. This dependency

cannot be neglected. It is here proposed that planning solutions outside of the decision

theoretic domain may be leveraged in order to provide reliable estimates of the expected

temporal duration of a given task.

Consider, once again, the conceptual robot navigation problem which was here intro-

duced. If the problem was approached as that of classic path planning in a continuous

domain, various existing algorithms in the field of motion planning could be applied in

a straightforward manner, to provide efficient solutions in potentially less time than it

would take to solve an equivalent continuous-state (PO)MDP. Any such algorithm which

could provide an estimated length of the path could then be used as an intermediary

to obtain the time model for a given event. An example of a path planning framework

which is particularly suited to continuous domains is that of Rapidly-Exploring Ran-

dom Trees (RRTs) (Lavalle, 1998). The application of this technique in this navigation

task would result in a structure similar to what is shown in Figure 5.3, which could

be queried to produce approximate path lengths to a desired posture, from which the

travel time could be estimated.
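A sketch of this kind of conversion is given below: a hypothetical rrt_path_length query returns an approximate path length, and a parametric (here, Gamma) time model is obtained by assuming a nominal speed and matching moments. Both the speed model and the distribution family are assumptions made for the example.

```python
def time_model_from_path(rrt_path_length, q_start, q_goal,
                         nominal_speed=0.5, spread_ratio=0.2):
    """Derive a Gamma sojourn-time model from an approximate path length estimate."""
    length = rrt_path_length(q_start, q_goal)   # e.g. metres, returned by the RRT query
    mean_t = length / nominal_speed             # expected travel time
    std_t = spread_ratio * mean_t               # crude spread around the expectation
    shape = (mean_t / std_t) ** 2               # Gamma parameters by moment matching
    scale = std_t ** 2 / mean_t
    return ("gamma", shape, scale)
```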

If each state-action pair is then associated with a particular posture in the underlying

configuration space of the robot (i.e. a desired posture for a given task), then the event

probabilities along the path, as well as their time models, could be estimated, given

the expected time and length of that path. This would, however, require an initial

posture to be specified, which may only be known with some associated uncertainty.

A possible solution to this problem would be to take the first moment of an initial

probability distribution in configuration space. However, if the expected position of


Figure 5.3: The application of an RRT-based algorithm to the previously described navi-

gation task, as an alternative to the discretization of the configuration space of a robotic

agent.

the robot is found to be incorrect at run-time, this would precipitate a re-initialization

of the planning process. A more thorough solution would make use of higher order

moments to provide a more accurate, parametric representation of the estimated time

to the next event.

A so-described Event-Driven POMDP can be solved through any available POMDP

algorithm (see Section 2.3.2). To make use of a path length estimate, as returned by an

RRT-based algorithm, a separate module would need to be used as an interface between

the POMDP solver and the RRT planner. Given a transition tuple 〈s, s′, a〉, this module

would extract the respective desired posture and supply it alongside the current posture

estimate to the path planner. It would then infer, from the path estimate, the time

and transition models for that particular tuple, so that the process remains transparent

from the perspective of the POMDP solver. The only difference would be that, instead

of having stochastic models represented as conditional probability tables or decision

diagrams, the POMDP solver would get these quantities, for each tuple, indirectly

through other algorithms. This process is depicted in Figure 5.4.

Figure 5.4: A tentative framework for an Event-Driven POMDP solver (POMDP Solver ↔ Event Manager ↔ RRT Planner). It would extend a regular POMDP solver by providing it with dynamic transition probabilities, which would be supplied by underlying, task-specific algorithms. The example considers an RRT planner for a navigation task.
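A minimal sketch of such an interface module is shown below, under the assumption that the planner exposes a path-length query and that a mapping from state-action pairs to desired postures is available; the exponential time model and the success-probability callable are placeholders for whatever models the designer supplies, not fixed components of the framework.

```python
class EventManager:
    """Interface between a POMDP solver and a task-specific planner (here, an RRT)."""

    def __init__(self, rrt_path_length, desired_posture, success_probability):
        self.rrt_path_length = rrt_path_length          # (q_from, q_to) -> path length
        self.desired_posture = desired_posture          # (s, a) -> target posture
        self.success_probability = success_probability  # (s, s', a) -> event probability

    def models_for(self, s, s_next, a, q_current, nominal_speed=0.5):
        """Return a time model f^a_{s's} and transition probability T(s', s, a)
        for one transition tuple, computed on demand from the path estimate."""
        q_goal = self.desired_posture(s, a)
        mean_time = self.rrt_path_length(q_current, q_goal) / nominal_speed
        time_model = ("exponential", 1.0 / mean_time)   # placeholder parametric family
        return time_model, self.success_probability(s, s_next, a)
```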

It follows that the state of the system should be extended with the actual posture of


the robot in its configuration space. However, since the POMDP algorithm would only

be operating over a finite set of possible resulting postures (i.e. the expected postures

for each event occurring in a given state), it remains to be seen how the agent could
infer the best action to take, given its particular posture at run-time, which could

be different from those predicted by the planner. The straightforward approach would

be to implement a nearest-neighbour search in configuration space, although this may

be a lossy approximation.

Note that the use of path planners such as RRTs serves here as an example for the

particular case of navigation tasks. For other types of tasks which may be involved in a

robotics environment (such as grasping an object, sampling a rock, etc.) this algorithm

might not be suitable, and an appropriate task-specific alternative would be used in its

place. This may place undesirable effort on the problem designer, and therefore future

work in this topic will attempt to evaluate different task-specific algorithms for the

most common tasks in the field of cooperative robotics. However, since the particular

low-level algorithm which is used to estimate the duration of a task is immaterial to the

POMDP solver, these tasks could still be modeled as discrete MDPs, if that is a viable

option. In this case, the framework would be reduced to a normal hierarchical POMDP

planner.

Another shortcoming of POSMDP-based solutions, which was already alluded to
in Section 2.2.2, is that they cannot represent concurrent events in factored models, since

all time models must be relative to the last decision instant, and cannot depend on an

absolute measure of time. A straightforward solution to this problem would be to extend

the GSMDP framework to partially observable domains. Although this is an apparently

feasible solution, it has not yet been investigated. The addition of partially observable

events raises the issues of possible false-positives (observations signaling events that

didn’t really happen) and false-negatives (events which happened but did not trigger

an observation), which will affect the decision-making process.

In summary, the current open problems with this approach are as follows:

• How to deal with uncertainty in the initial posture of the robot;

• How to include time models which are dependent on the absolute execution time

of the system, rather than on the instant of the last decision.


• How to extract, at run-time, the optimal action for any given “full state” of the

robot (i.e. its high-level state extended with its posture in configuration space),

or for a probability distribution over said state, given that the task-specific algo-

rithms only consider expected positions.

5.1.1 Extension to Multiagent Systems

Assuming that the inclusion of partial observability in the GSMDP framework is feasible,

then the extension of the aforementioned methods to multiagent scenarios would be

straightforward: concurrent events could be used in order to model the decision-making

process of separate agents (as shown in the work of Younes and Simmons (2004)).

The centralized planner, which would assume free communication in order to reduce

complexity, would reason over all possible events observed by the team, given its joint

action and joint belief state, performing essentially the same planning process as in an

MPOMDP.

This result would be very advantageous in terms of communication and synchroniza-

tion for real-world systems. Indeed, agents would only have to exchange information

upon the occurrence of an event, and even then, if an agent is known to be unaffected

by a particular event (this could be inferred from a graphical, factored description of

the problem), then that agent could withhold communication until its task terminates.

Furthermore, the techniques developed in Chapter 4 would be directly applicable,

since the embedded discrete POMDP would still be operating over joint belief distribu-

tions.

5.2 Adaptive Solution Methods

One of the limitations of DT frameworks which was identified in Section 3.2 is that

model-based approaches are inflexible to change. The sensitivity of the policy of an

agent to changes in the model parameters (transition and observation probabilities, or

rewards) is undefined, and therefore any change to these parameters implies a full restart of the planning process.

In contrast, model-free approaches, such as pure reinforcement learning, may require
many training episodes, which may not be fully obtainable through simulation. Furthermore, as the
size of the problem grows, the amount of required experience may become impractical.
Their strength lies in their ability to adapt to changes in the environment, if these are slower


Figure 5.5: Collecting a history of outcomes of a given task, including initial and final postures, the resulting event, and elapsed time (e.g. q = {x_i, y_i, θ_i} : o_i : T = t_i for each recorded transition).

to occur than the convergence rate of the algorithm. A real-world implementation of a

DT framework has much to gain from being adaptable to change, since in that case it

would not require frequent expert intervention.

The advantages of both model-based and reinforcement learning methods have been

integrated by frameworks such as Dyna (Sutton, 1991), which use experience gathered

by the agent to update the probabilistic models of the environment, and in turn, use

the estimated models to learn better actions for hypothetical situations.

This proposed research topic, although still more incipient than that discussed in
the last section, is to explore an approach similar to the Dyna framework for the problem

of planning in multiagent, event-driven systems.

The core idea of this approach is that all of the stochastic models of the system could

be improved, in an event-based description of the system, by maintaining a history of

the outcomes of state-action pairs, including the posture of the agent in its configuration

space, both at the onset and at the end of a task, the resulting observed event, and

the time it took for that event to take place. This is depicted in Figure 5.5, for a

particular transition in the running example of a navigation task.
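As an illustration, such a history could be stored as a list of simple records; the field names and values below are hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TaskOutcome:
    state: str                                  # high-level state at the onset of the task
    action: str                                 # high-level action (task) that was executed
    posture_start: Tuple[float, float, float]   # (x, y, theta) at the onset
    posture_end: Tuple[float, float, float]     # (x, y, theta) when the event fired
    event: str                                  # observed event that terminated the task
    elapsed_time: float                         # seconds between onset and event

# One hypothetical entry of the history, matching the situation sketched in Figure 5.5.
history = [
    TaskOutcome("R1", "goto_D1", (0.3, 1.2, 0.0), (2.1, 1.0, 0.1), "entered_R2", 7.4),
]
```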

With a sufficiently large training set, the estimated stochastic models could prove

more reliable than the prior information provided by the problem designer and the

task-specific estimations which were earlier described.

In fact, if the training set is large enough to cover the relevant areas of the configu-

ration space of the robot, then the time to an event could be estimated based solely on


Figure 5.6: An algorithm to adapt the stochastic models of a POSMDP, given data defined over a continuous environment. In this process, K represents the optimal number of clusters. (Flowchart: a new data point arrives → find the nearest cluster → if K has changed, re-classify the points and determine a model for each class; otherwise, if the fit is no longer appropriate, calculate a new model for that class → continue to the planner.)

past experience. This means that the procedures described in the last section could be

used as a source of prior knowledge, to be later replaced by the learned data.

Since all stochastic models of the system were assumed to depend on the underlying

continuous configuration space of the agent, the process of updating these models,

based on a history of past transitions, observations and decision instants, is worth additional

consideration.

Postures in continuous configuration space would never be exactly repeated, and

therefore, for learning to occur over this data, these postures can be aggregated into a

discrete number of classes. However, it may be detrimental to the planning process, as

it has been previously shown, to leave this discretization to the problem designer. The

same approach can be taken, however, as that which is used to aggregate continuous

sensor data into discrete observations in practical POMDPs (Section 3.2.3): that is,

to apply a clustering algorithm over the data. The only difference is that the optimal



number of clusters is now unknown, and must also be estimated. An alternative would

be to use function approximators directly over the continuous space, as suggested by

Antos et al. (2007).
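One common way of estimating the number of classes together with the clustering, sketched below, is to run k-means for a range of cluster counts and keep the value of K with the best silhouette score; the use of scikit-learn, of k-means, and of the silhouette criterion are all assumptions made for this example rather than choices committed to in this work.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_postures(postures, k_max=10):
    """postures: array of shape (num_samples, dim), e.g. rows of (x, y, theta)."""
    X = np.asarray(postures)
    best = (None, -np.inf, None)                       # (K, score, labels)
    for k in range(2, min(k_max, len(X) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)            # higher is a better-separated clustering
        if score > best[1]:
            best = (k, score, labels)
    return best[0], best[2]                            # estimated K and cluster labels
```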

For parametric models such as F , an algorithm such as the one in Figure 5.6 could

be applied, which would re-estimate the parameter vector for that distribution whenever

the corresponding goodness-of-fit (obtained through a χ2 test, for example) drops below

a pre-defined threshold.
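A sketch of this re-estimation step is given below for a Gamma time model, using a χ² test on binned durations; the binning strategy and the choice of the Gamma family are assumptions of the example, not prescriptions of the proposed method.

```python
import numpy as np
from scipy import stats

def refit_if_poor_fit(durations, shape, scale, alpha=0.05, n_bins=8):
    """Re-estimate the Gamma parameters of a time model when a chi-squared
    goodness-of-fit test on the observed durations drops below the threshold."""
    durations = np.asarray(durations, dtype=float)
    edges = np.linspace(durations.min(), durations.max(), n_bins + 1)
    observed, _ = np.histogram(durations, bins=edges)
    # Expected counts in each bin under the currently fitted Gamma model.
    cdf = stats.gamma.cdf(edges, a=shape, scale=scale)
    expected = np.maximum(np.diff(cdf), 1e-9) * observed.sum()
    expected *= observed.sum() / expected.sum()                 # match totals for the test
    _, p_value = stats.chisquare(observed, expected, ddof=2)    # two fitted parameters
    if p_value < alpha:
        shape, _, scale = stats.gamma.fit(durations, floc=0.0)  # re-fit from the data
    return shape, scale
```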

It remains to be seen, however, if the outcome of such an adaptive framework could

still represent a value function as a collection of vectors over joint belief space, so that

the techniques discussed in Chapter 4 could be applied without any loss of generality.

5.3 Experimental Environments

This section describes the experimental testbeds that will be used as case-studies for

the methods discussed in this work.

5.3.1 Robotic Soccer as a Framework for Cooperative Robotics

The rise in popularity of robotic competitions, over the past decades, has led to the ma-

turity of environments such as robotic soccer into reputable case-studies for cooperative

robotics. The RoboCup Middle Size League (MSL) class of autonomous soccer robots
is characterized by the use of onboard sensing and decentralized decision making.

This makes the MSL environment a highly dynamic, partially observable domain, in



which both reactivity and cooperative planning are of the utmost importance. Explicit

communication is mediated through an offboard wireless access point, which can be

used to share data in real time among all robots. However, communication is not free

from delays or failures, and so its use should be supervised. This domain constitutes,

therefore, a very appropriate environment for the application of the methods which are

proposed in this work.

The SocRob (Society of Robots) project was created at the Institute for Systems

and Robotics, Lisbon, in 1997, with the goal of studying cooperative robotics, and the

domain of robotic soccer quickly became its central case study. The present iteration

of the SocRob team fields five robots equipped with omnidirectional motion and vision.

Figure 5.7: The SocRob robotic soccer team.

Managing the complete behavior of a robotic soccer team as a single decision-

theoretic problem is infeasible given the sheer number of game situations which can

occur at any given time. However, modeling each game situation (a foul, a throw-in,

etc.) as an independent decision making problem is a viable approach. These problems

will be modeled as multiagent POSMDPs (or an extension to that framework), using

the developments of this work, in order to demonstrate the applicability of decision-

theoretic methods to fast-paced, realistic robotic problems.

5.3.2 Semi-Autonomous Multiagent Surveillance

One of the most interesting qualities of cooperative multiagent systems, and one which

is potentially applicable to a wide array of situations, lies in their ability to integrate

information collected by different agents in order to improve the overall perception of the

environment. While distributed surveillance systems are ubiquitous in practice, their


information is usually processed by human operators, which, for large systems, requires

a dedicated workforce, and may not be completely reliable under some circumstances,

for example in the detection of near-simultaneous events.

The CMU|Portugal MAIS-S (MultiAgent Intelligent Surveillance System) project,

established as a partnership between ISR-Lisbon, CMU, INESC-ID, and the Portuguese

SME Observit, studies the application of AI techniques, and in particular of MDP-based

frameworks, to the problem of multiagent surveillance.

The ultimate goal of the project is to implement a distributed surveillance network

in which each agent is able to detect relevant events in the environment, and to send

this information in a coherent manner to a human operator, while retaining the ability

of operating autonomously, and cooperatively, according to the perceived state of the

environment. Agents are not limited to stationary cameras: other types of agents may

include autonomous robots, fire detectors, and even security guards, which may be

provided instructions automatically, and according to the joint policy of the system.

Figure 5.8: A multiagent surveillance system. Agents can either be stationary (e.g. cameras) or mobile with the ability of directly affecting the environment (e.g. autonomous robots). Observations are shared, and decisions are taken in a centralized fashion.

The experimental testbed for the MAIS-S project is ISRoboNet, a distributed camera-

robot network under operation at ISR-Lisbon. This is a realistically sized environment

which presents the same challenges to DT frameworks as common real-world surveillance

applications.

Figure 5.9: An overview of a camera network installed at ISR facilities, which will serve as a testbed for the methods developed in the scope of the MAIS-S project.

While other research components of the project will attempt to scale up solution

methods for Dec-POMDPs and their subclasses, the present work will focus on the

implementation side of the project. This will involve the development of realistic model

descriptions that allow the use of DT techniques without losing accuracy with respect

to the underlying dynamics of the system. One of the fundamental aspects of this

work will be to ensure that the provided models are still solvable by traditional (Dec-

)POMDP methods (i.e. that they are reducible to those frameworks from the planning

perspective), so that the theoretical and practical sides of the project are coherent with

each other.

Upon the completion of the MAIS-S project, if the experimental results show that

the application of these methods to full scale real-world applications is feasible, their

permanent implementation in active surveillance networks will be made possible.



5.4 Research Timeline

Table 5.1 shows a proposed timeline for the remaining research activities involved in this

work. Figure 5.10 plots the same information in a Gantt chart for easier visualization.

5.4.1 Risk Management

Although the proposed research topics appear feasible, the following contingencies are

planned (ordered by decreasing severity):

1. Event-Based POMDPs are impractical/intractable: If, even for a single agent, the

formulation of such a framework is not viable, an expected alternative is to fall

back to traditional Hierarchical POMDPs, which have been extensively described
in related work (Foka and Trahanias, 2007; Pineau and Thrun, 2002; Theocharous
et al., 2005). This would involve considering discrete-time low-level actions.


Activity                                    Starting Date   Deadline
Hierarchical Event-Driven POMDPs            24/02/12        15/06/12
Adaptive Solution Methods                   01/06/12        08/09/12
Implementation of Full-Scale Experiments    24/08/12        31/01/13
Thesis Elaboration                          01/02/13        27/03/13
Thesis Defense                              –               –/04/13

Table 5.1: Proposed research timeline.

Figure 5.10: Gantt chart for the proposed timeline.


2. Event-Based POMDPs are not extendable to multiagent domains: A multiagent

Event-Based POMDP is expected to function as a partially observable exten-

sion of a GSMDP. If this is not possible, full observability could be assumed at

the highest levels of abstraction, so that the multiagent problem can indeed be

contained in a regular GSMDP. Existing theory could then be applied to obtain

equivalent CTMDP models (Younes and Simmons, 2004). This would remove

partial observability of events.

3. Adaptive Solution Methods cannot be developed: if including reinforcement learn-

ing in all levels of the hierarchical model is impossible (although existing work
suggests otherwise (Sutton, 1991)), then these methods can be restricted to learning
primitive, low-level tasks for individual agents, although this would, in principle,

require re-planning at higher levels of abstraction, after changes in the behaviors

associated with these tasks.


5.5 Conclusions

The main purpose of this work is to bring established methods in the theory of MDPs

closer to real-world applications. By doing so, further interest may be stimulated in the

use of these models, and their theoretical advancement will have stronger motivation.

A review of past real-world implementations of MDP-based methods was presented,

and the limitations of these frameworks which remain open to investigation were iden-

tified through a conceptual simulation.

One of these limitations, that of the cost of obtaining efficient communication poli-

cies, was addressed by exploiting the structure of typical solutions provided by POMDP

methods. Agents were shown to possess the ability to act independently in some situa-

tions, overcoming the need for constant communication and synchronization.

Future research directions were proposed to address the remaining issues with the

implementation of DT frameworks: an extension of traditional hierarchical planners to

domains with underlying continuous states; and the possibility of using reinforcement

learning to adapt the solutions of said planners to domains with slowly changing dy-

namics. These approaches are grounded in existing theory, and they seem feasible
given the current state-of-the-art and the available time which is allocated to each of

them.

The proposed methods will be implemented in two realistic testbeds, one in the

domain of Robotic Soccer, and the other in a distributed sensor-robot network. The

latter scenario is expected to provide a basis for a real-world product.


References

J.H. Ahn and J.C. Hornberger. Involving patients in the cadaveric kidney transplant

allocation process: A decision-theoretic perspective. Management Science, pages

629–641, 1996.

A. Antos, R. Munos, C. Szepesvari, et al. Fitted q-iteration in continuous action-space

mdps. In Twenty-First Annual Conference on Neural Information Processing Systems

(NIPS’07), 2007.

T. Arai, E. Pagello, and L.E. Parker. Guest editorial advances in multirobot systems.

Robotics and Automation, IEEE Transactions on, 18(5):655–661, 2002.

Y. Aviv and A. Pazgal. A partially observed markov decision process for dynamic

pricing. Management Science, pages 1400–1416, 2005.

N. Bäuerle and U. Rieder. Markov Decision Processes with applications to finance.

Springer Verlag, 2011.

R. Becker, S. Zilberstein, and C.V. Goldman. Solving transition independent decentral-

ized markov decision processes. In Journal of Artificial Intelligence Research, 2004.

R. Bellman. A markovian decision process. Technical report, DTIC Document, 1957a.

R. Bellman. Dynamic Programming. Princeton University Press, 1957b.

Daniel S. Bernstein, Shlomo Zilberstein, and Neil Immerman. The complexity of decen-

tralized control of markov decision processes. In Mathematics of Operations Research,

page 2002, 2000.


C. Boutilier. Planning, learning and coordination in multiagent decision processes. In

Proceedings of the 6th conference on Theoretical aspects of rationality and knowledge,

pages 195–210. Morgan Kaufmann Publishers Inc., 1996.

C. Boutilier and D. Poole. Computing optimal policies for partially observable decision

processes using compact representations. In Proceedings of the National Conference

on Artificial Intelligence, pages 1168–1175. Citeseer, 1996.

C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence

in bayesian networks. In Proceedings of the Twelfth Conference on Uncertainty in

Artificial Intelligence, pages 115–123. San Francisco, California, 1996.

Xavier Boyen and Daphne Koller. Tractable inference for complex stochastic processes.

In Proc. of Uncertainty in Artificial Intelligence, 1998.

S.J. Bradtke and M.O. Duff. Reinforcement learning methods for continuous-time

markov decision problems. Advances in neural information processing systems 7,

7:393, 1995.

E. Brunskill, L.P. Kaelbling, T. Lozano-Pérez, and N. Roy. Planning in partially-

observable switching-mode continuous domains. Annals of Mathematics and Artificial

Intelligence, 58(3):185–216, 2010.

J. Capitán, M.T.J. Spaan, L. Merino, and A. Ollero. Decentralized multi-robot coop-

eration with auctioned pomdps. In Sixth Annual Workshop on Multiagent Sequential

Decision Making in Uncertain Domains (MSDM-2011), page 24, 2011.

A. R. Cassandra. Exact and approximate algorithms for partially observable markov

decision processes. PhD thesis, Brown University, Providence, RI, USA, 1998a.

A.R. Cassandra. A survey of pomdp applications. In Working Notes of AAAI 1998 Fall

Symposium on Planning with Partially Observable Markov Decision Processes, pages

17–24, 1998b.

C.G. Cassandras and S. Lafortune. Introduction to discrete event systems, volume 11.

Kluwer academic publishers, 1999.


H.P. Chao and A.S. Manne. Oil stockpiles and import reductions: A dynamic program-

ming approach. Operations research, pages 632–651, 1983.

H. Cheng. Algorithms for Partially Observable Markov Decision Processes. PhD thesis,

University of British Columbia, 1988.

J.S. de Cani. A dynamic programming algorithm for embedded markov chains when

the planning horizon is at infinity. Management Science, pages 716–733, 1964.

C. Derman, G.J. Lieberman, and S.M. Ross. A stochastic sequential allocation model.

Operations Research, pages 1120–1130, 1975.

S.E. Dreyfus. A note on an industrial replacement process. OR, 8(4):190–193, 1957.

R.O. Duda, P.E. Hart, and D.G. Stork. Pattern classification. John Wiley & Sons,

2001.

J. Duncan and L.S. Scholnick. Interrupt and opportunistic replacement strategies for

systems of deteriorating components. Operational Research Quarterly, pages 271–283,

1973.

F.J. Díez, M.A. Palacios, and M. Arias. Mdps in medicine: Opportunities and chal-

lenges. In Decision Making in Partially Observable, Uncertain Worlds: Exploring

Insights from Multiple Communities, 2011. Workshop at IJCAI11.

J.E. Eckles. Optimum maintenance with incomplete information. Operations Research,

pages 1058–1067, 1968.

H. Ellis, M. Jiang, and R.B. Corotis. Inspection, maintenance, and repair with partial

observability. Journal of Infrastructure Systems, 1(2):92–99, 1995.

T. Fabian, JL Fisher, MW Sasieni, and A. Yardeni. Purchasing raw material on a

fluctuating market. Operations Research, pages 107–122, 1959.

X.G. Fang and G. Havas. On the worst-case complexity of integer gaussian elimina-

tion. In Proceedings of the 1997 international symposium on Symbolic and algebraic

computation, pages 28–31. ACM, 1997.


E.A. Feinberg and A. Shwartz. Handbook of Markov decision processes: methods and

applications, volume 40. Springer Netherlands, 2002.

S.E. Fleten and T.K. Kristoffersen. Short-term hydropower production planning by

stochastic programming. Computers & Operations Research, 35(8):2656–2671, 2008.

Amalia Foka and Panos Trahanias. Real-time hierarchical POMDPs for autonomous

robot navigation. Robotics and Autonomous Systems, 55(7):561–571, 2007.

P.W. Glynn. A GSMP formalism for discrete event systems. Proceedings of the IEEE,

77(1):14–23, 1989.

Claudia V. Goldman and Shlomo Zilberstein. Decentralized control of cooperative sys-

tems: Categorization and complexity analysis. Journal of Artificial Intelligence Re-

search, 22:143–174, 2004.

R.A. Grupen and J.A. Coelho. Acquiring state from control dynamics to learn grasping

policies for robot hands. Advanced Robotics, 16(5):427–443, 2002.

X. Guo and O. Hernández-Lerma. Continuous-Time Markov Decision Processes: The-

ory and Applications, volume 62. Springer, 2009.

E.A. Hansen and Z. Feng. Dynamic programming for pomdps using a factored state

representation. In Proceedings of the Fifth International Conference on AI Planning

Systems, pages 130–139, 2000.

E.A. Hansen, D.S. Bernstein, and S. Zilberstein. Dynamic programming for partially

observable stochastic games. In Proceedings of the National Conference on Artificial

Intelligence, pages 709–715. Menlo Park, CA; Cambridge, MA; London; AAAI Press;

MIT Press; 1999, 2004.

M. Hauskrecht and H. Fraser. Planning treatment of ischemic heart disease with par-

tially observable markov decision processes. Artificial Intelligence in Medicine, 18(3):

221–244, 2000.

M. Hauskrecht and B. Kveton. Linear program approximations for factored continuous-

state markov decision processes. In In Advances in Neural Information Processing

Systems 16 (NIPS’03), pages 895–902, 2003.


Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic plan-

ning using decision diagrams. In Proc. of Uncertainty in Artificial Intelligence, 1999.

R.A. Howard. Semi-markovian decision processes. Proceedings of International Statis-

tical Inst, 1963.

K. Hsiao, L.P. Kaelbling, and T. Lozano-Perez. Grasping POMDPs. In Robotics and

Automation, 2007 IEEE International Conference on, pages 4685–4692. IEEE, 2007.

D. Hsu, W.S. Lee, and N. Rong. A point-based pomdp planner for target tracking.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on,

pages 2644–2650. IEEE, 2008.

E. Ignall and P. Kolesar. Optimal dispatching of an infinite-capacity shuttle: control at

a single terminal. Operations Research, pages 1008–1024, 1974.

Tommi Jaakkola, Satinder P. Singh, and Michael I. Jordan. Reinforcement learning

algorithm for partially observable markov decision problems. In Advances in Neural

Information Processing Systems 7, pages 345–352. MIT Press, 1995.

A.H. Jazwinski. Stochastic processes and filtering theory, volume 64. Academic Press,

Inc., 1970.

W.S. Jewell. Markov-renewal programming. ii: Infinite return models, example. Oper-

ations Research, pages 949–971, 1963.

L.P. Kaelbling, M.L. Littman, and A.W. Moore. Reinforcement learning: A survey.

Arxiv preprint cs/9605103, 1996.

R.S. Kaplan. Optimal investigation strategies with imperfect information. Journal of

Accounting Research, pages 32–43, 1969.

J. Kober, B. Mohler, and J. Peters. Learning perceptual coupling for motor primi-

tives. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International

Conference on, pages 834–839. IEEE, 2008.

S. Koenig and R. Simmons. Xavier: A robot navigation architecture based on par-

tially observable markov decision process models. Artificial Intelligence Based Mobile

Robotics: Case Studies of Successful Robot Systems, pages 91–122, 1998.


A.R. Kristensen. A survey of markov decision programming techniques applied to the

animal replacement problem. European Review of Agricultural Economics, 21(1):73,

1994.

A. Kumar and S. Zilberstein. Anytime planning for decentralized pomdps using expec-

tation maximization. In Uncertainty in Artificial Intelligence, pages 294–301, 2010.

H. Kurniawati, D. Hsu, and W.S. Lee. SARSOP: Efficient point-based POMDP planning

by approximating optimally reachable belief spaces. In Robotics: Science and Systems,

2008.

D.E. Lane. A partially observable model of decision making by fishermen. Operations

Research, pages 240–254, 1989.

Steven M. Lavalle. Rapidly-exploring random trees: A new tool for path planning.

Technical report, Computer Science Dept., Iowa State University, 1998. TR 98-11.

Z.W. Lim, D. Hsu, and W.S. Lee. Monte carlo value iteration with macro-actions.

Journal of Web Semantics, 2:2, 2004.

J. Lindqvist. Operation of a hydrothermal electric system: A multistage decision pro-

cess. Power Apparatus and Systems, Part III. Transactions of the American Institute

of Electrical Engineers, 81(3):1–6, 1962.

J.D.C. Little. The use of storage water in a hydroelectric system. Journal of the

Operations Research Society of America, pages 187–197, 1955.

D.W. Low. Optimal dynamic pricing policies for an M/M/s queue. Operations Research,

pages 545–561, 1974.

S. Mahadevan and J. Connell. Automatic programming of behavior-based robots using

reinforcement learning. Artificial Intelligence, 55(2-3):311–365, 1992.

S. Mahadevan and N. Khaleeli. Robust mobile robot navigation using partially-

observable semi-Markov decision processes, 1999.

P. Maybeck. Stochastic Models, Estimation, and Control. Academic Press, 1979.

M.J. Mataric. Reward functions for accelerated learning. In Proceedings of the Eleventh

International Conference on Machine Learning, volume 189. Morgan Kaufmann, New

Brunswick, NJ, 1994.

M.J. Matarić. Reinforcement learning in the multi-robot domain. Autonomous Robots,

4(1):73–83, 1997.

David A. McAllester and Satinder Singh. Approximate planning for factored POMDPs

using belief state simplification. In Proc. of Uncertainty in Artificial Intelligence,

1999.

João V. Messias, Matthijs T. J. Spaan, and Pedro U. Lima. Multi-robot planning under

uncertainty with communication: a case study. In Multi-agent Sequential Decision

Making in Uncertain Domains, 2010. Workshop at AAMAS10.

J.V. Messias, M.T.J. Spaan, and P. U. Lima. Efficient offline communication policies

for factored multiagent POMDPs. In Proceedings of the 25th Annual Conference on

Neural Information Processing Systems (NIPS11), 2011.

S.A. Miller, Z.A. Harris, and E.K.P. Chong. A POMDP framework for coordinated guid-

ance of autonomous UAVs for multitarget tracking. EURASIP Journal on Advances

in Signal Processing, 2009:2, 2009.

G.E. Monahan. A survey of partially observable Markov decision processes: Theory,

models, and algorithms. Management Science, pages 1–16, 1982.

T. Murata. Petri nets: Properties, analysis and applications. Proceedings of the IEEE,

77(4):541–580, 1989.

K.P. Murphy. Dynamic Bayesian networks: representation, inference and learning. PhD

thesis, University of California, 2002.

Ranjit Nair, Pradeep Varakantham, Milind Tambe, and Makoto Yokoo. Networked dis-

tributed POMDPs: A synthesis of distributed constraint optimization and POMDPs.

In Proc. of the National Conference on Artificial Intelligence, 2005.

G. Neto. Planning, Learning and Control Under Uncertainty Based on Discrete Event

Systems and Reinforcement Learning. PhD thesis, Instituto Superior Técnico, Uni-

versidade Técnica de Lisboa, 2010.

A. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang.

Autonomous inverted helicopter flight via reinforcement learning. Experimental

Robotics IX, pages 363–372, 2006.

B. Ng, C. Meyers, K. Boakye, and J. Nitao. Towards applying interactive POMDPs to

real-world adversary modeling. In Twenty-Second IAAI Conference, 2010.

I. Nourbakhsh, R. Powers, and S. Birchfield. Dervish: an office-navigating robot. AI

Magazine, 16(2):53, 1995.

M. Ohnishi, H. Kawai, and H. Mine. An optimal inspection and replacement policy

under incomplete state information. European Journal of Operational Research, 27(1):

117–128, 1986.

F.A. Oliehoek and A. Visser. A hierarchical model for decentralized fighting of large

scale urban fires. In AAMAS’06 Workshop on Hierarchical Autonomous Agents and

Multi-Agent Systems, pages 14–21, 2006.

F.A. Oliehoek, M.T.J. Spaan, and N. Vlassis. Optimal and approximate Q-value func-

tions for decentralized POMDPs. Journal of Artificial Intelligence Research, 32(1):

289–353, 2008a.

Frans A. Oliehoek. Factored Dec-POMDPs: exploiting locality of interaction. In Value-

based planning for teams of agents in stochastic partially observable environments,

chapter 2. PhD thesis, University of Amsterdam, 2010.

Frans A. Oliehoek, Matthijs T. J. Spaan, and Nikos Vlassis. Dec-POMDPs with delayed

communication. In Multi-agent Sequential Decision Making in Uncertain Domains,

2007. Workshop at AAMAS07.

Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Whiteson, and Nikos Vlassis. Ex-

ploiting locality of interaction in factored Dec-POMDPs. In Proc. of Int. Conference

on Autonomous Agents and Multi Agent Systems, 2008b.

D.W. Onstad and R. Rabbinge. Dynamic programming and the computation of eco-

nomic injury levels for crop disease control. Agricultural Systems, 18(4):207–226,

1985.

J. Pajarinen and J. Peltonen. Efficient planning for factored infinite-horizon Dec-

POMDPs. In Proc. of the International Joint Conference on Artificial Intelligence,

2011.

C.H. Papadimitriou and J.N. Tsitsiklis. The complexity of Markov decision processes.

Mathematics of Operations Research, pages 441–450, 1987.

S. Paquet, L. Tobin, and B. Chaib-Draa. An online POMDP algorithm for complex

multiagent environments. In Proceedings of the fourth international joint conference

on Autonomous agents and multiagent systems, pages 970–977. ACM, 2005.

R.E. Parr. Hierarchical control and learning for Markov decision processes. PhD thesis,

University of California, Berkeley, 1998.

Relu Patrascu, Pascal Poupart, Dale Schuurmans, Craig Boutilier, and Carlos Guestrin.

Greedy linear value-approximation for factored Markov decision processes. In Eigh-

teenth National Conference on Artificial Intelligence, pages 285–291, Menlo Park, CA,

USA, 2002. American Association for Artificial Intelligence. ISBN 0-262-51129-0.

J. Pineau and S. Thrun. An integrated approach to hierarchy and abstraction for

POMDPs. Technical Report CMU-RI-TR-02-21, Robotics Institute, Carnegie Mellon

University, 2002.

J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm

for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18,

pages 1025–1032. Citeseer, 2003.

J.M. Porta, N. Vlassis, M.T.J. Spaan, and P. Poupart. Point-based value iteration for

continuous POMDPs. The Journal of Machine Learning Research, 7:2329–2367, 2006.

P. Poupart and C. Boutilier. Value-directed belief state approximation for POMDPs.

In Proc. of Uncertainty in Artificial Intelligence, volume 130, 2000.

Pascal Poupart. Exploiting Structure to Efficiently Solve Large Scale Partially Observ-

able Markov Decision Processes. PhD thesis, University of Toronto, 2005.

G. Pritchard, A.B. Philpott, and P.J. Neame. Hydroelectric reservoir optimization in a

pool market. Mathematical Programming, 103(3):445–461, 2005.

M.L. Puterman. Markov decision processes: Discrete stochastic dynamic programming.

John Wiley & Sons, Inc., 1994.

David V. Pynadath and Milind Tambe. The communicative multiagent team decision

problem: Analyzing teamwork theories and models. Journal of Artificial Intelligence

Research, 16:389–423, 2002.

E. Rachelson, G. Quesnel, F. Garcia, and P. Fabiani. Approximate policy iteration

for generalized semi-Markov decision processes: an improved algorithm. In European

Workshop on Reinforcement Learning, 2008.

M. Riedmiller, T. Gabel, R. Hafner, and S. Lange. Reinforcement learning for robot

soccer. Autonomous Robots, 27(1):55–73, 2009.

P. Rong and M. Pedram. Extending the lifetime of a network of battery-powered mobile

devices by remote processing: a Markovian decision-based approach. In Proceedings

of the 40th annual Design Automation Conference, pages 906–911. ACM, 2003.

M. Roth, R. Simmons, and M. Veloso. Decentralized communication strategies for

coordinated multi-agent policies. In Multi-Robot Systems: From Swarms to Intelligent

Automata, volume IV. Kluwer Academic Publishers, 2005a.

M. Roth, R. Simmons, and M. Veloso. Reasoning about joint beliefs for execution-time

communication decisions. In Proc. of Int. Conference on Autonomous Agents and

Multi Agent Systems, pages 786–793. ACM, 2005b.

Maayan Roth, Reid Simmons, and Manuela Veloso. Exploiting factored representations

for decentralized execution in multi-agent teams. In Proc. of Int. Conference on

Autonomous Agents and Multi Agent Systems, 2007.

A.J. Schaefer, M.D. Bailey, S.M. Shechter, and M.S. Roberts. Modeling medical treat-

ment using Markov decision processes. Operations Research and Health Care, pages

593–612, 2005.

S. Seuken and S. Zilberstein. Memory-bounded dynamic programming for Dec-POMDPs.

In International Joint Conference on Artificial Intelligence, pages 2009–2016, 2007.

S.M. Shechter, M.D. Bailey, A.J. Schaefer, and M.S. Roberts. The optimal time to

initiate HIV therapy under ordered health states. Operations Research, 56(1):20, 2008.

R. Simmons and S. Koenig. Probabilistic robot navigation in partially observable en-

vironments. In International Joint Conference on Artificial Intelligence, volume 14,

pages 1080–1087. Lawrence Erlbaum Associates, 1995.

E.J. Sondik. The optimal control of partially observable Markov processes. PhD thesis,

Stanford University, 1971.

F.A. Sonnenberg and J.R. Beck. Markov models in medical decision making. Medical

Decision Making, 13(4):322, 1993.

Matthijs Spaan and Frans Groen. Team coordination among robotic soccer players. In

Gal Kaminka, Pedro Lima, and Raúl Rojas, editors, RoboCup 2002: Robot Soccer

World Cup VI, volume 2752 of Lecture Notes in Computer Science, pages 409–416.

Springer Berlin / Heidelberg, 2003.

Matthijs T. J. Spaan, Frans A. Oliehoek, and Nikos Vlassis. Multiagent planning

under uncertainty with stochastic communication delays. In Proc. of Int. Conf. on

Automated Planning and Scheduling, pages 338–345, 2008.

M.T.J. Spaan and N. Vlassis. A point-based POMDP algorithm for robot planning.

In Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International

Conference on, volume 3, pages 2399–2404. IEEE, 2004.

M.T.J. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for

POMDPs. Journal of Artificial Intelligence Research, 24(1):195–220, 2005.

M.T.J. Spaan, G.J. Gordon, and N. Vlassis. Decentralized planning under uncertainty

for teams of communicating agents. In Proceedings of the fifth international joint

conference on Autonomous agents and multiagent systems, pages 249–256. ACM,

2006.

M.T.J. Spaan, F.A. Oliehoek, and C. Amato. Scaling up optimal heuristic search in

Dec-POMDPs via incremental expansion. In Proc. of International Joint Conference

on Artificial Intelligence, pages 2027–2032, 2011.

Mohan Sridharan, Jeremy Wyatt, and Richard Dearden. Planning to see: A hierarchical

approach to planning visual actions on a robot using POMDPs. Artificial Intelligence,

174:704–725, 2010.

S. Stidham and R. Weber. A survey of Markov decision models for control of networks

of queues. Queueing Systems, 13(1):291–314, 1993.

Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. PhD

thesis, University of Massachusetts Amherst, 1984. AAI8410337.

R.S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting.

ACM SIGART Bulletin, 2(4):160–163, 1991.

R.S. Sutton and A.G. Barto. Reinforcement learning: An introduction. MIT Press, 1998.

R.S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework

for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):

181–211, 1999.

D. Szer and F. Charpillet. An optimal best-first search algorithm for solving infinite

horizon Dec-POMDPs. Machine Learning: ECML 2005, pages 389–399, 2005.

N. Tao, J. Baxter, and L. Weaver. A multi-agent, policy-gradient approach to network

routing. In Proc. of the 18th Int. Conf. on Machine Learning, pages 553–560.

Morgan Kaufmann, 2001.

G. Tesauro. Practical issues in temporal difference learning. Machine Learning, 8(3):

257–277, 1992.

G. Theocharous and S. Mahadevan. Approximate planning with hierarchical partially

observable Markov decision processes for robot navigation. In Proceedings of the IEEE

International Conference on Robotics and Automation, 2002.

G. Theocharous, S. Mahadevan, and L.P. Kaelbling. Spatial and temporal abstractions

in POMDPs applied to robot navigation. Technical Report MIT-CSAIL-TR-2005-058,

Computer Science and Artificial Intelligence Laboratory, MIT, 2005.

S. Thrun. Learning to play the game of chess. Advances in Neural Information Pro-

cessing Systems, pages 1069–1076, 1995.

S. Thrun. Monte Carlo POMDPs. Advances in Neural Information Processing Systems,

12:1064–1070, 2000.

J.N. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic pro-

gramming. Machine Learning, 22(1):59–94, 1996.

D. Wang and B.J. Adams. Optimization of real-time reservoir operations with Markov

decision processes. Water Resources Research, 22(3):345–352, 1986.

D.X. Wang and X.R. Cao. Event-based optimization for POMDPs and its application

in portfolio management. In Proceedings of the 18th IFAC World Congress, pages

3228–3233, 2011.

C.J.C.H. Watkins. Learning from delayed rewards. PhD thesis, King’s College, Cam-

bridge, 1989.

C.C. White. Procedures for the solution of a finite-horizon, partially observed, semi-

Markov optimization problem. Operations Research, pages 348–358, 1976.

C.C. White, E.C. Wilson, and A.C. Weaver. Decision aid development for use in am-

bulatory health care settings. Operations Research, pages 446–463, 1982.

Chelsea C. White. Partially observed Markov decision processes: a survey. Annals of

Operations Research, 32, 1991.

D.J. White. Real applications of Markov decision processes. Interfaces, pages 73–83,

1985.

D.J. White. Further real applications of Markov decision processes. Interfaces, pages

55–61, 1988.

D.J. White. A survey of applications of Markov decision processes. Journal of the

Operational Research Society, pages 1073–1096, 1993.

D.J. White and J.M. Norman. Control of cash reserves. OR, pages 309–328, 1965.

J.D. Williams and S. Young. Partially observable Markov decision processes for spoken

dialog systems. Computer Speech & Language, 21(2):393–422, 2007.

I.H. Witten. An adaptive optimal controller for discrete-time Markov environments.

Information and Control, 34(4):286–295, 1977.

S.J. Witwicki. Abstracting Influences for Efficient Multiagent Coordination Under Un-

certainty. PhD thesis, University of Michigan, 2012.

F. Wu and X. Chen. Solving large-scale and sparse-reward Dec-POMDPs with correlation-

MDPs. In Ubbo Visser, Fernando Ribeiro, Takeshi Ohashi, and Frank Dellaert, edi-

tors, RoboCup 2007: Robot Soccer World Cup XI, volume 5001 of Lecture Notes in

Computer Science, pages 208–219. Springer Berlin / Heidelberg, 2008.

Feng Wu, Shlomo Zilberstein, and Xiaoping Chen. Multi-agent online planning with

communication. In Int. Conf. on Automated Planning and Scheduling, 2009.

H.L.S. Younes and R.G. Simmons. Solving generalized semi-Markov decision processes

using continuous phase-type distributions. In Proceedings of the National Conference

on Artificial Intelligence, pages 742–748. AAAI Press, 2004.

S. Young, M. Gasic, S. Keizer, F. Mairesse, J. Schatzmann, B. Thomson, and K. Yu.

The hidden information state model: A practical framework for POMDP-based spoken

dialogue management. Computer Speech & Language, 24(2):150–174, 2010.

E. Zhou, M.C. Fu, and S.I. Marcus. A density projection approach to dimension reduc-

tion for continuous-state POMDPs. In Decision and Control, 2008. CDC 2008. 47th

IEEE Conference on, pages 5576–5581. IEEE, 2008.
