Markov Decision Processes: Basic Concepts


Transcript of Markov Decision Processes: Basic Concepts

Page 1: Markov Decision Processes Basics Concepts

Markov Decision Processes: Basic Concepts

Alan Fern

Page 2: Markov Decision Processes Basics Concepts

Some AI Planning Problems

- Solitaire
- Fire & Rescue Response Planning
- Real-Time Strategy Games
- Helicopter Control
- Legged Robot Control
- Network Security/Control

Page 3: Markov Decision Processes Basics Concepts

Some AI Planning Problems

- Health Care
  - Personalized treatment planning
  - Hospital logistics/scheduling
- Transportation
  - Autonomous vehicles
  - Supply chain logistics
  - Air traffic control
- Assistive Technologies
  - Dialog management
  - Automated assistants for elderly/disabled
  - Household robots
- Sustainability
  - Smart grid
  - Forest fire management
- …

Page 4: Markov Decision Processes Basics Concepts

Common Elements

- We have a system that changes state over time
- We can (partially) control the system's state transitions by taking actions
- The problem gives an objective that specifies which states (or state sequences) are more/less preferred
- Problem: at each moment we must select an action to optimize the overall (long-term) objective
  - i.e., produce the most preferred state sequences

Page 5: Markov Decision Processes Basics Concepts

Observe-Act Loop of an AI Planning Agent

[Figure: the agent receives observations of the world/system state and sends actions back to the world/system.]

Goal: maximize expected reward over lifetime

Page 6: Markov Decision Processes Basics Concepts

Stochastic/Probabilistic Planning: the Markov Decision Process (MDP) Model

[Figure: the agent observes the world state, chooses an action from a finite set, and the Markov Decision Process returns the next state and reward.]

Goal: maximize expected reward over lifetime

Page 7: Markov Decision Processes Basics Concepts

Example MDP (a dice game)

- The state describes all visible information about the game
- The actions are the different choices of dice to roll (or selecting a category to score)

Goal: maximize score at end of game

Page 8: Markov Decision Processes Basics Concepts

Markov Decision Processes

An MDP has four components: S, A, R, T:

- a finite state set S
- a finite action set A
- a transition function T(s,a,s') = Pr(s' | s,a)
  - the probability of going to state s' after taking action a in state s
- a bounded, real-valued reward function R(s,a)
  - the immediate reward we get for being in state s and taking action a
  - Roughly speaking, the objective is to select actions in order to maximize total reward over time
  - For example, in a goal-based domain R(s,a) may equal 1 for goal states and 0 for all others (or -1 reward for non-goal states)

(A minimal code sketch of this tuple follows below.)
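Below is a minimal sketch (not from the slides) of how the (S, A, R, T) components could be written down for a tiny two-state, two-action MDP; the state and action names are purely illustrative.

```python
# A toy MDP (S, A, R, T) represented with plain Python dictionaries.
S = ["s0", "s1"]                     # finite state set
A = ["stay", "go"]                   # finite action set

# T[(s, a)] is a distribution over next states: Pr(s' | s, a)
T = {
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s0", "go"):   {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s1": 1.0},
    ("s1", "go"):   {"s0": 0.5, "s1": 0.5},
}

# R[(s, a)]: a goal-based reward, 1 for acting in the goal state s1, 0 elsewhere.
R = {(s, a): (1.0 if s == "s1" else 0.0) for s in S for a in A}
```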

Page 9: Markov Decision Processes Basics Concepts

[Figure: a game state with available actions such as Roll(die1), Roll(die1,die2), Roll(die1,die2,die3), …]

Page 10: Markov Decision Processes Basics Concepts

[Figure: the action Roll(die1,die2) causes a probabilistic state transition over outcomes (1,1), (1,2), …, (6,6).]

Reward: we only get reward for "category selection" actions; the reward equals the points gained.

Page 11: Markov Decision Processes Basics Concepts

What is a solution to an MDP?

MDP Planning Problem:
  Input: an MDP (S,A,R,T)
  Output: ????

- Should the solution to an MDP be just a sequence of actions such as (a1, a2, a3, …)?
  - Consider a single-player card game like Blackjack/Solitaire.
- No! In general an action sequence is not sufficient
  - Actions have stochastic effects, so the state we end up in is uncertain
  - This means that we might end up in states where the remainder of the action sequence doesn't apply or is a bad choice
  - A solution should tell us what the best action is for any possible situation/state that might arise

Page 12: Markov Decision Processes Basics Concepts

Policies ("plans" for MDPs)

- For this class we will assume that we are given a finite planning horizon H
  - i.e., we are told how many actions we will be allowed to take
- A solution to an MDP is a policy that says what to do at any moment
- Policies are functions from states and times to actions
  - π : S x T → A, where T is the non-negative integers
  - π(s,t) tells us what action to take at state s when there are t stages-to-go
  - A policy that does not depend on t is called stationary; otherwise it is called non-stationary
  - (a small code sketch follows below)
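As an illustration (not part of the slides), here is how a non-stationary and a stationary policy for the toy MDP above could be written; the decision logic is made up purely for the example.

```python
# Non-stationary policy pi(s, t): S x T -> A for the toy MDP above.
# With more than one stage-to-go it heads for the goal state s1;
# on the last stage it simply stays put.
def pi(s: str, t: int) -> str:
    if t <= 1:
        return "stay"
    return "go" if s == "s0" else "stay"

# A stationary policy ignores the stages-to-go argument entirely.
def pi_stationary(s: str) -> str:
    return "go" if s == "s0" else "stay"
```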

Page 13: Markov Decision Processes Basics Concepts

What is a solution to an MDP?

MDP Planning Problem:
  Input: an MDP (S,A,R,T)
  Output: a policy such that ????

- We don't want to output just any policy
- We want to output a "good" policy
- One that accumulates lots of reward

Page 14: Markov Decision Processes Basics Concepts

Value of a Policy

- How do we quantify the accumulated reward of a policy π?
- Every policy has a value function that indicates how good it is across states and time
- V^h_\pi(s) is the h-horizon value of policy π in state s
  - It equals the expected total reward of starting in s and executing π for h steps
  - r_t is a random variable denoting the reward at time t when following π starting in s

  V^h_\pi(s) = E\left[ \sum_{t=0}^{h-1} r_t \mid \pi, s \right]

Page 15: Markov Decision Processes Basics Concepts


Page 16: Markov Decision Processes Basics Concepts

What is a solution to an MDP?

MDP Planning Problem:
  Input: an MDP (S,A,R,T)
  Output: a policy that achieves an "optimal value function"

- A policy is optimal if, for all states s, its value at s is no worse than the value of any other policy at s

Theorem: For every MDP there exists an optimal policy (perhaps not unique).

- This says that there is one policy that is uniformly as good as any other policy across the entire state space.

Page 17: Markov Decision Processes Basics Concepts

Computational Problems

There are two problems of typical interest:

- Policy evaluation:
  - Given an MDP and a policy π
  - Compute the finite-horizon value function V^h_\pi(s) for any h and s
- Policy optimization:
  - Given an MDP and a horizon H
  - Compute the optimal finite-horizon policy
- How many finite-horizon policies are there?
  - Exponentially many: a deterministic non-stationary policy picks one of |A| actions for each of the |S| x H (state, stages-to-go) pairs, giving |A|^{|S| \cdot H} policies
  - So we can't just enumerate policies for efficient optimization

Page 18: Markov Decision Processes Basics Concepts

Computational Problems

- Dynamic programming techniques can be used for both policy evaluation and optimization
  - Polynomial time in the number of states and actions
  - http://web.engr.oregonstate.edu/~afern/classes/cs533/
- Is polynomial time in the number of states and actions good?
- Not when these numbers are enormous!
  - As is the case for most realistic applications
  - Consider Klondike Solitaire, computer network control, etc.

Enter Monte-Carlo Planning

Page 19: Markov Decision Processes Basics Concepts

Approaches for Large Worlds: Monte-Carlo Planning

- Often a simulator of a planning domain is available or can be learned from data
  - e.g., Klondike Solitaire, Fire & Emergency Response

Page 20: Markov Decision Processes Basics Concepts

Large Worlds: Monte-Carlo Approach

- Often a simulator of a planning domain is available or can be learned from data
- Monte-Carlo Planning: compute a good policy for an MDP by interacting with an MDP simulator

[Figure: the planner sends actions to a World Simulator, which stands in for the Real World and returns the resulting state plus reward.]

Page 21: Markov Decision Processes Basics Concepts

Example Domains with Simulators

- Traffic simulators
- Robotics simulators
- Military campaign simulators
- Computer network simulators
- Emergency planning simulators
  - large-scale disaster and municipal
- Sports domains
- Board games / video games
  - Go / RTS

In many cases Monte-Carlo techniques yield state-of-the-art performance.

Page 22: Markov Decision Processes Basics Concepts

MDP: Simulation-Based Representation

A simulation-based representation gives: S, A, R, T, I:

- finite state set S (|S| is generally very large)
- finite action set A (|A| = m, which we will assume is of reasonable size)
- stochastic, real-valued, bounded reward function R(s,a) = r
  - stochastically returns a reward r given input s and a
- stochastic transition function T(s,a) = s' (i.e. a simulator)
  - stochastically returns a state s' given input s and a
  - the probability of returning s' is dictated by Pr(s' | s,a) of the MDP
- stochastic initial state function I
  - stochastically returns a state according to an initial state distribution

These stochastic functions can be implemented in any language! (A minimal Python sketch follows below.)
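As one illustration (not from the slides), the simulator interface for the toy MDP defined earlier might look as follows; the function names sample_R, sample_T, and sample_I are made up for the example.

```python
import random

def sample_R(s: str, a: str) -> float:
    """Stochastic reward function R(s,a); deterministic for this toy MDP."""
    return R[(s, a)]

def sample_T(s: str, a: str) -> str:
    """Stochastically returns a next state s' with probability Pr(s' | s, a)."""
    dist = T[(s, a)]
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs)[0]

def sample_I() -> str:
    """Stochastic initial-state function; this toy MDP always starts in s0."""
    return "s0"
```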

Page 23: Markov Decision Processes Basics Concepts

Computational Problems

- Policy evaluation:
  - Given an MDP simulator, a policy π, a state s, and a horizon h
  - Compute the finite-horizon value function V^h_\pi(s)
- Policy optimization:
  - Given an MDP simulator, a horizon H, and a state s
  - Compute the optimal finite-horizon policy at state s

Page 24: Markov Decision Processes Basics Concepts

Trajectories

- We can use the simulator to observe trajectories of any policy π from any state s
- Let Traj(s, π, h) be a stochastic function that returns a length-h trajectory of π starting at s

  Traj(s, π, h)
    s_0 = s
    For i = 1 to h-1
      s_i = T(s_{i-1}, π(s_{i-1}))
    Return s_0, s_1, …, s_{h-1}

- The total reward of a trajectory is given by

  R(s_0, …, s_{h-1}) = \sum_{i=0}^{h-1} R(s_i)

(A runnable version of Traj is sketched below.)
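A short runnable sketch (not from the slides) of Traj and the trajectory reward, built on the sample_T/sample_R helpers above; note that the reward helper charges R(s, π(s)) in each visited state, which is an assumption matching the MDP's R(s,a).

```python
def traj(s, pi, h):
    """Sample a length-h trajectory s_0, ..., s_{h-1} of policy pi from state s."""
    states = [s]
    for _ in range(1, h):
        s = sample_T(s, pi(s))   # stochastic transition under the policy's action
        states.append(s)
    return states

def total_reward(states, pi):
    """Total reward of a trajectory, mirroring R(s_0, ..., s_{h-1})."""
    return sum(sample_R(s, pi(s)) for s in states)
```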

Page 25: Markov Decision Processes Basics Concepts

Policy Evaluation

- Given a policy π and a state s, how can we estimate V^h_\pi(s)?
- Simply sample w trajectories starting at s and average the total reward received
- Select a sampling width w and a horizon h
- The function EstimateV(s, π, h, w) returns an estimate of V^h_\pi(s)

  EstimateV(s, π, h, w)
    V = 0
    For i = 1 to w
      V = V + R( Traj(s, π, h) )   ; add total reward of a trajectory
    Return V / w
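The same estimator in runnable form (an illustrative sketch reusing the traj/total_reward helpers above):

```python
def estimate_v(s, pi, h, w):
    """Monte-Carlo estimate of V^h_pi(s): average total reward of w trajectories."""
    return sum(total_reward(traj(s, pi, h), pi) for _ in range(w)) / w

# Example: estimate the 10-step value of the stationary policy from state "s0"
# using 1000 sampled trajectories.
print(estimate_v("s0", pi_stationary, h=10, w=1000))
```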

- How close to the true value will this estimate be?

Page 26: Markov Decision Processes Basics Concepts

Sampling-Error Bound

  V^h_\pi(s) = E\left[ \sum_{t=0}^{h-1} r_t \mid \pi, s \right] = E[ R(Traj(s, \pi, h)) ]

  EstimateV(s, \pi, h, w) = \frac{1}{w} \sum_{i=1}^{w} r_i,   where   r_i \sim R(Traj(s, \pi, h))
  (the approximation is due to sampling)

- Note that the r_i are samples of the random variable R(Traj(s, π, h))
- We can apply the additive Chernoff bound, which bounds the difference between an expectation and an empirical average

Page 27: Markov Decision Processes Basics Concepts

Aside: Additive Chernoff Bound

- Let X be a random variable with maximum absolute value Z, and let x_i, i = 1, …, w be i.i.d. samples of X
- The Chernoff bound bounds the probability that the average of the x_i is far from E[X]:

  \Pr\left[ \left| E[X] - \frac{1}{w} \sum_{i=1}^{w} x_i \right| \ge \epsilon \right] \le \exp\left( -w \left( \frac{\epsilon}{Z} \right)^2 \right)

  Equivalently: with probability at least 1 - \delta,

  \left| E[X] - \frac{1}{w} \sum_{i=1}^{w} x_i \right| \le Z \sqrt{ \frac{1}{w} \ln \frac{1}{\delta} }

Page 28: Markov Decision Processes Basics Concepts

Aside: Coin Flip Example

- Suppose we have a coin with probability of heads equal to p
- Let X be a random variable where X = 1 if the coin flip gives heads and 0 otherwise (so Z from the bound is 1)
- E[X] = 1·p + 0·(1-p) = p
- After flipping the coin w times we can estimate the heads probability by the average of the x_i
- The Chernoff bound tells us that this estimate converges exponentially fast to the true mean (coin bias) p:

  \Pr\left[ \left| p - \frac{1}{w} \sum_{i=1}^{w} x_i \right| \ge \epsilon \right] \le \exp(-w \epsilon^2)

Page 29: Markov Decision Processes Basics Concepts

Sampling Error Bound

  V^h_\pi(s) = E\left[ \sum_{t=0}^{h-1} r_t \mid \pi, s \right] = E[ R(Traj(s, \pi, h)) ]

  EstimateV(s, \pi, h, w) = \frac{1}{w} \sum_{i=1}^{w} r_i,   where   r_i \sim R(Traj(s, \pi, h))
  (the approximation is due to sampling)

We get that, with probability at least 1 - \delta,

  \left| V^h_\pi(s) - EstimateV(s, \pi, h, w) \right| \le V_{\max} \sqrt{ \frac{1}{w} \ln \frac{1}{\delta} }

where V_{\max} plays the role of Z in the Chernoff bound, i.e. a bound on the absolute value of the total trajectory reward.

We can increase w to get an arbitrarily good approximation (see the sketch below).
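For concreteness, a small illustrative calculation (not from the slides) of the sampling width needed for a target accuracy, obtained by inverting the bound above:

```python
import math

def required_width(v_max: float, eps: float, delta: float) -> int:
    """Smallest w with v_max * sqrt((1/w) * ln(1/delta)) <= eps."""
    return math.ceil((v_max / eps) ** 2 * math.log(1.0 / delta))

# e.g. with V_max = 10, accuracy 0.5, and failure probability 0.01:
print(required_width(10.0, 0.5, 0.01))   # 1843 trajectories
```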

Page 30: Markov Decision Processes Basics Concepts

Two Player MDP (aka Markov Games)

[Figure: in a Markov Game, Player 1 and Player 2 take turns sending actions to the game and receiving the resulting state and reward.]

- So far we have only discussed single-player MDPs/games
- Your labs and competition will be 2-player zero-sum games (zero-sum means the sum of the players' rewards is zero)
- We assume players take turns (non-simultaneous moves)

Page 31: Markov Decision Processes Basics Concepts

Simulators for 2-Player Games

A simulation-based representation gives:

- finite state set S (assume the state encodes whose turn it is)
- action sets for player 1 (called max) and player 2 (called min)
- stochastic, real-valued, bounded reward functions for the two players
  - stochastically return rewards for max and min respectively
  - in our zero-sum case the two rewards sum to zero, so min's reward is the negative of max's
  - so min maximizes its own reward by minimizing the reward of max
- stochastic transition function T(s,a) = s' (i.e. a simulator)
  - stochastically returns a state s' given input s and a
  - the probability of returning s' is dictated by Pr(s' | s,a) of the game
  - generally s' will be a turn for the other player

These stochastic functions can be implemented in any language!

Page 32: Markov Decision Processes Basics Concepts

Finite Horizon Value of a Game

- Given two policies, one for each player, we can define the finite-horizon value of the game
- The h-horizon value function is defined with respect to max
  - it is the expected total reward for player 1 after both players execute their policies starting in s for h steps
- For zero-sum games the value with respect to player 2 is just the negation of this value
- Given the two policies we can easily use the simulator to estimate this value
  - just as we did for the single-player MDP setting (a small sketch follows below)
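A minimal sketch, under stated assumptions, of such an estimate for two fixed policies; the simulator hooks step_game(s, a) -> (s', r_max) and is_max_turn(s) are hypothetical names, not from the slides.

```python
def estimate_game_value(s0, pi_max, pi_min, h, w, step_game, is_max_turn):
    """Monte-Carlo estimate of the h-horizon game value with respect to max."""
    total = 0.0
    for _ in range(w):
        s, ret = s0, 0.0
        for _ in range(h):
            a = pi_max(s) if is_max_turn(s) else pi_min(s)
            s, r_max = step_game(s, a)   # reward reported from max's point of view
            ret += r_max
        total += ret
    return total / w                     # min's value is just the negation
```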

Page 33: Markov Decision Processes Basics Concepts

Summary

- Markov Decision Processes (MDPs) are common models for sequential planning problems
- The solution to an MDP is a policy
- The goodness of a policy is measured by its value function
  - the expected total reward over H steps
- Monte-Carlo Planning (MCP) is used for enormous MDPs for which we have a simulator
- Evaluating a policy via MCP is very easy