Partially Observable Markov Decision Processes · 2018-09-27

Page 1

Partially Observable Markov Decision Processes
Chapter 17

Mausam
(Based on slides by Dieter Fox, Lee Wee Sun)

Page 2

Stochastic Planning: MDPs

[Figure: agent-environment loop ("What action next?"). The agent sends actions to and receives percepts from the environment; the environment is static, fully observable, and stochastic; percepts are perfect and action effects are instantaneous.]

Page 3

Partially Observable MDPs

[Figure: the same agent-environment loop, but the environment is now only partially observable and percepts are noisy; it remains static and stochastic, with instantaneous action effects.]

Page 4

Stochastic, Fully Observable

Page 5

Stochastic, Partially Observable

Page 6

POMDPs

In POMDPs we apply the very same idea as in MDPs.

Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states.

Let b be the belief of the agent about the current state.

POMDPs compute a value function over belief space:

$$V_T(b) = \max_a \left[ R(a, b) + \gamma \int V_{T-1}(b')\, p(b' \mid a, b)\, db' \right]$$

Page 7

POMDPs

Each belief is a probability distribution; thus, the value function is a function of an entire probability distribution.

This is problematic, since probability distributions are continuous.

We also have to deal with the huge complexity of belief spaces.

For finite worlds with finite state, action, and observation spaces and finite horizons, we can represent the value functions by piecewise linear functions.
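As an illustration (not part of the original slides), such a piecewise linear, convex value function is commonly stored as a set of "alpha vectors", one linear function per vector; the value of a belief is the maximum of the dot products. The vectors and the belief below are made-up numbers.

```python
import numpy as np

# A piecewise linear, convex value function over beliefs can be stored as a
# set of "alpha vectors": V(b) = max_alpha dot(alpha, b).
alpha_vectors = np.array([
    [-100.0, 100.0],   # e.g. the linear payoff of one terminal action
    [ 100.0,  -50.0],  # and of another
])

def value(belief: np.ndarray, alphas: np.ndarray) -> float:
    """Evaluate the piecewise linear value function at a belief point."""
    return float(np.max(alphas @ belief))

b = np.array([0.4, 0.6])        # belief over two states
print(value(b, alpha_vectors))  # -> 20.0
```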

Page 8

Applications

Robotic control

• helicopter maneuvering, autonomous vehicles

• Mars rover - path planning, oversubscription planning

• elevator planning

Game playing - backgammon, tetris, checkers

Neuroscience

Computational Finance, Sequential Auctions

Assisting elderly in simple tasks

Spoken dialog management

Communication Networks – switching, routing, flow control

War planning, evacuation planning

Page 9

Dialog Systems

Aim: Find out what the person wants

Actions: Ask appropriate questions

Observations: Output of speech recognizer

Williams & Young, 2007

Page 10

Assistive Technology

Aim: Assist person with dementia in handwashing

Actions: Prompt the person with suggestions when appropriate

Observations: Video of activity

Hoey, von Bertoldi, Poupart, Mihailidis 2007

Page 11

Aircraft Collision Avoidance System

• Aim: Avoid collision with nearby aircraft

• Actions: Maneuver the UAV

• Observations: Limited-view sensors

Image from http://web.mit.edu/temizer/www/selim/

Temizer, Kochenderfer, Kaelbling, Lozano-Perez, Kuchar 2010

Bai, Hsu, Lee, Kochenderfer 2011

Page 12

Commonalities in the examples

• Need to learn, estimate or track the current state of the system from the history of actions and observations

• Based on the current state estimate, select an action that leads to good outcomes, not just currently but also in the future (planning)

POMDP

Page 13

Powerful but Intractable

Partially Observable Markov Decision Process (POMDP) is a very powerful modeling tool.

But with great power … comes great intractability!

No known way to solve it quickly. No small policy.

Image from http://ocw.mit.edu/courses/mathematics/18-405j-advanced-complexity-theory-fall-2001/

Page 14

State Space Size

[Figure: example applications (tracking, navigation, simple grasping, dialog, collision avoidance, assistance) arranged along an axis of state space size, from moderate state spaces, where exact belief and policy evaluation is possible, to very large or continuous state spaces, which require approximate belief and policy evaluation.]

Page 15

Partially Observable Markov Decision Process (POMDP) <S,A,T,R,Ω,O>

Observations Ω: set of possible observations

• Navigation example: set of locations output by the GPS sensor

Page 16

POMDP <S,A,T,R,Ω,O>

Observation probability function O: probability of observing o in state s' when the previous action is a

• O(o, a, s') = Pr(o | a, s')

• Navigation example: darker shade indicates higher probability

Page 17

POMDP <S,A,T,R,Ω,O>

Belief b: probability of state s

• b(s) = Pr(s)

• Navigation example: the exact position of the robot is unknown; we only have a probability distribution over positions, obtained through sensor readings
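As a concrete illustration (not from the slides), the belief can be tracked from the history of actions and observations with the standard Bayes filter built from T and O; the array layouts and the numbers below are assumptions made for this sketch.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One Bayes-filter step: fold in the action a, then the observation o.

    b : belief over states, shape (S,)
    T : transition probabilities, T[a, s, s'] = Pr(s' | s, a)
    O : observation probabilities, O[a, s', o] = Pr(o | a, s')
    """
    predicted = T[a].T @ b                 # Pr(s') = sum_s Pr(s'|s,a) b(s)
    unnormalized = O[a][:, o] * predicted  # weight by Pr(o | a, s')
    return unnormalized / unnormalized.sum()

# Tiny made-up example with 2 states, 1 action, 2 observations.
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])   # shape (A, S, S)
O = np.array([[[0.7, 0.3], [0.3, 0.7]]])   # shape (A, S, Z)
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=0, T=T, O=O))
```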

Page 18

POMDP <S,A,T,R,Ω,O>

Policy π: function from belief to action

• a = π(b)

• Navigation example: which way to move, based on the current belief

Page 19

POMDP <S,A,T,R,Ω,O>

R(a, b): expected reward for taking action a when the belief is b

Optimal policy π*: the policy π that maximizes the expected total (discounted) reward
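Written out (a standard formulation, not taken verbatim from the slides), the belief-based reward is the state reward averaged under the belief, and the optimal policy maximizes the expected sum of these rewards:

$$R(a, b) = \sum_{s} b(s)\, R(s, a), \qquad \pi^{*} = \arg\max_{\pi}\; \mathbb{E}\left[\sum_{t} \gamma^{t}\, R\big(\pi(b_t), b_t\big)\right]$$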

Page 20

POMDP <S,A,T,R,Ω,O>

Value function for π: expected return when starting from belief b and following π

Optimal value function V*: value function associated with an optimal policy π*
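In symbols (again a standard statement rather than a quote from the slides):

$$V^{\pi}(b) = \mathbb{E}\left[\sum_{t} \gamma^{t}\, R\big(\pi(b_t), b_t\big) \;\middle|\; b_0 = b\right], \qquad V^{*}(b) = \max_{\pi} V^{\pi}(b)$$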

Page 21

An Illustrative Example

[Figure: a two-state example. States x1 and x2; terminal actions u1 and u2 with payoffs r(x1, u1) = -100, r(x2, u1) = +100, r(x1, u2) = +100, r(x2, u2) = -50; a sensing action u3 that changes the state with probability 0.8 and leaves it unchanged with probability 0.2; measurements z1 and z2 with p(z1 | x1) = 0.7, p(z2 | x1) = 0.3, p(z1 | x2) = 0.3, p(z2 | x2) = 0.7.]

Page 22

The Parameters of the Example

The actions u1 and u2 are terminal actions.

The action u3 is a sensing action that potentially leads to a state transition.

The horizon is finite and γ = 1.

Page 23

Payoff in POMDPs

In MDPs, the payoff (or return) depends on the state of the system.

In POMDPs, however, the true state is not exactly known.

Therefore, we compute the expected payoff by integrating over all states:
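Written out (a reconstruction consistent with the rest of the example rather than the slide's own rendering):

$$r(b, u) = E_{x}\big[r(x, u)\big] = \int r(x, u)\, b(x)\, dx = p_1\, r(x_1, u) + p_2\, r(x_2, u) \quad \text{(in the two-state example)}$$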

Page 24

Payoffs in Our Example (1)

If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100.

If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100.

In between, it is the linear combination of the extreme values weighted by the probabilities.
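Concretely, with p1 denoting the belief in x1 (and using the payoff values assumed in the example figure above, including r(x1, u2) = +100 and r(x2, u2) = -50):

$$r(b, u_1) = -100\, p_1 + 100\,(1 - p_1), \qquad r(b, u_2) = 100\, p_1 - 50\,(1 - p_1)$$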

Page 25

Payoffs in Our Example (2)

Page 26

The Resulting Policy for T=1

Given we have a finite POMDP with T=1, we would use V1(b) to determine the optimal policy.

In our example, the optimal policy for T=1 is a simple threshold policy in the belief (see the sketch below).

This is the upper thick graph in the diagram.
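Under the payoff values assumed above, equating r(b, u1) and r(b, u2) gives the switching point p1 = 3/7, so one plausible reconstruction of the policy is:

$$\pi_1(b) = \begin{cases} u_1 & \text{if } p_1 \le 3/7 \\ u_2 & \text{otherwise} \end{cases}$$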

Page 27

Piecewise Linearity, Convexity

The resulting value function V1(b) is the maximum of the three functions at each point.

It is piecewise linear and convex.
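In formula form (a reconstruction; r(b, u3) is the expected payoff of the sensing action):

$$V_1(b) = \max\big\{\, r(b, u_1),\; r(b, u_2),\; r(b, u_3) \,\big\}$$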

Page 28

Pruning

If we carefully consider V1(b), we see that only the first two components contribute.

The third component can therefore safely be pruned away from V1(b).

Page 29

Increasing the Time Horizon

Assume the robot can make an observation before deciding on an action.

[Figure: V1(b)]

Page 30

Increasing the Time Horizon

Assume the robot can make an observation before deciding on an action.

Suppose the robot perceives z1 for which p(z1 | x1)=0.7 and p(z1| x2)=0.3.

Given the observation z1 we update the belief using Bayes rule.

$$p_1' = \frac{p(z_1 \mid x_1)\, p_1}{p(z_1)} = \frac{0.7\, p_1}{p(z_1)}, \qquad p_2' = \frac{p(z_1 \mid x_2)\, p_2}{p(z_1)} = \frac{0.3\,(1 - p_1)}{p(z_1)}$$

$$p(z_1) = 0.7\, p_1 + 0.3\,(1 - p_1) = 0.4\, p_1 + 0.3$$

Page 31

Increasing the Time Horizon

Assume the robot can make an observation before deciding on an action.

Suppose the robot perceives z1 for which p(z1 | x1)=0.7 and p(z1| x2)=0.3.

Given the observation z1 we update the belief using Bayes rule.

Thus V1(b | z1) is obtained by evaluating V1 at this updated belief.

Page 32

Expected Value after Measuring

Since we do not know in advance what the next measurement will be, we have to compute the expected belief:

$$\bar{V}_1(b) = E_{z}\big[V_1(b \mid z)\big] = \sum_{i=1}^{2} p(z_i)\, V_1(b \mid z_i) = \sum_{i=1}^{2} p(z_i)\, V_1\!\left(\frac{p(z_i \mid x_1)\, p_1}{p(z_i)}\right)$$

Page 33

Expected Value after Measuring

Since we do not know in advance what the next measurement will be, we have to compute the expected belief:
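A small numeric sketch of this computation for the two-state example (the measurement model is the one from the slides; the terminal payoffs are the values assumed earlier, and the cost of sensing itself is ignored here):

```python
# Expected T=1 value after measuring, for the two-state example.
P_Z_GIVEN_X1 = {"z1": 0.7, "z2": 0.3}
P_Z_GIVEN_X2 = {"z1": 0.3, "z2": 0.7}

def v1(p1: float) -> float:
    """T=1 value: best terminal action under belief p1 = Pr(x1)."""
    return max(-100 * p1 + 100 * (1 - p1),   # assumed payoff of u1
                100 * p1 - 50 * (1 - p1))    # assumed payoff of u2

def expected_v1_after_measuring(p1: float) -> float:
    """E_z[ V1(b | z) ]: average the value of the updated belief over z."""
    total = 0.0
    for z in ("z1", "z2"):
        p_z = P_Z_GIVEN_X1[z] * p1 + P_Z_GIVEN_X2[z] * (1 - p1)
        p1_post = P_Z_GIVEN_X1[z] * p1 / p_z      # Bayes update
        total += p_z * v1(p1_post)
    return total

print(expected_v1_after_measuring(0.6))   # -> 46.0
```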

Page 34

Resulting Value Function

The four possible combinations yield the following function which then can be simplified and pruned.

Page 35

Value Function

[Figure: the updated belief b'(b|z1), the weighted values p(z1) V1(b|z1) and p(z2) V1(b|z2), and the expected value \bar{V}1(b).]

Page 36

State Transitions (Prediction)

When the agent selects u3, its state potentially changes.

When computing the value function, we have to take these potential state changes into account.

Page 37

Resulting Value Function after executing u3

Taking the state transitions into account, we finally obtain the value function shown on the next slide.

Page 38

Value Function after executing u3

[Figure: \bar{V}1(b) and \bar{V}1(b|u3).]

Page 39

Value Function for T=2

Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain (after pruning) the T=2 value function (see the sketch below).
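A plausible reconstruction of the expression behind this statement (the payoff of sensing, r(b, u3), is kept symbolic):

$$V_2(b) = \max\big\{\, r(b, u_1),\; r(b, u_2),\; r(b, u_3) + \bar{V}_1(b \mid u_3) \,\big\}$$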

Page 40

Graphical Representation of V2(b)

[Figure: plot of V2(b) over the belief, with a region where u1 is optimal, a region where u2 is optimal, and an unclear middle region where the outcome of measuring is important.]

Page 41

Deep Horizons and Pruning

We have now completed a full backup in belief space.

This process can be applied recursively.

The value functions for T=10 and T=20 are shown on the following slides.

Page 42

Deep Horizons and Pruning

Page 43

Page 44

Why Pruning is Essential

Each update introduces additional linear components to V.

Each measurement squares the number of linear components.

Thus, an unpruned value function for T=20 includes more than 10^547,864 linear functions.

At T=30 we have 10^561,012,337 linear functions.

The pruned value function at T=20, in comparison, contains only 12 linear components.

The combinatorial explosion of linear components in the value function is the major reason why POMDPs are impractical for most applications.
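To get a feel for this blow-up, here is a tiny sketch (an illustration of the growth pattern only, not the exact recurrence behind the numbers above): if every backup roughly squares the number of components, the exponent itself doubles with each step.

```python
import math

def log10_components(initial_components: int, horizon: int) -> float:
    """log10 of the component count after horizon-1 squaring backups."""
    log_n = math.log10(initial_components)
    for _ in range(horizon - 1):
        log_n *= 2  # squaring the count doubles its log
    return log_n

for t in (1, 5, 10, 20):
    print(t, log10_components(3, t))   # at T=20 the exponent is already ~250,000
```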

Page 45

POMDP Summary

POMDPs compute the optimal action in partially observable, stochastic domains.

For finite horizon problems, the resulting value functions are piecewise linear and convex.

In each iteration the number of linear constraints grows exponentially.

POMDPs so far have only been applied successfully to very small state spaces with small numbers of possible observations and actions.
