
Stochastic Model Predictive Control

Pantelis Sopasakis

IMT Institute for Advanced Studies Lucca

February 10, 2016


Outline

1. Intro: stochastic optimal control

2. Classification of SMPC approaches

3. Scenario-based SMPC

4. Affine disturbance feedback



I. Introduction

- Stochastic optimal control
- Control policies
- Dynamic programming


Stochastic optimal control

Stochastic optimal control lies at the core of every stochastic MPC formulation.


Stochastic optimal control

Uncertain dynamical system:

$x_{k+1} = f(x_k, u_k, w_k),$

where $w_k$ lives in a probability space $(\Omega_k, \mathcal{F}_k, \mathrm{P}_k)$ [1].

In stochastic optimal control, we take our decision $u_{k+j|k}$ at future time $k + j$ taking into account the available information up to that time.

[1] The probability distribution function of $w_k$ may be a function of $x_k$ and $u_k$, that is $\mathrm{P} = \mathrm{P}(\mathrm{d}w_k \mid x_k, u_k)$. See Bertsekas and Shreve, 1978.


Stochastic OC + Causality = ♥

At $k = j$ we observe $x_j$ and $w_j$ and we decide the control action using

- the initial information $x_0$ (and $w_0$),
- the current observation, that is $x_j$ (and $w_j$),
- the history of control actions.

Overall,

$u_j = \mu_j(x_0, w_0, u_0, \ldots, u_{j-1}, x_j, w_j).$

We thus construct the space $\Pi_N = (\mu_0, \ldots, \mu_{N-1})$ of (causal) control policies. In some cases it suffices to assume [2]

$u_j = \mu_j(x_j).$

[2] These are called Markov policies.


Stochastic OC + Causality = ♥

Assume we can observe $w_k$ at time $k$:

- $k = 0$: observe $x_0, w_0$
- $k = 0$: decide $u_0 = \mu_0(x_0, w_0)$
- $k = 1$: system response $x_1 = f(x_0, u_0, w_0)$
- $k = 1$: observe $x_1, w_1$
- $k = 1$: decide $u_1 = \mu_1(x_0, w_0, u_0, x_1)$
- $k = 2$: system response $x_2 = f(x_1, u_1, w_1)$
- $k = 2$: observe $x_2, w_2$
- ...
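To make this observe/decide/respond loop concrete, here is a minimal simulation sketch in Python (not from the slides); the scalar dynamics, the policy and the Gaussian disturbance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, w):
    # illustrative scalar dynamics x_{k+1} = f(x_k, u_k, w_k)
    return 0.9 * x + u + w

def mu(k, hist):
    # causal policy: may use everything observed up to time k
    x_k, w_k = hist["x"][-1], hist["w"][-1]
    return -0.5 * x_k - w_k          # simple illustrative feedback

N = 5
hist = {"x": [1.0], "w": [rng.normal()], "u": []}
for k in range(N):
    u_k = mu(k, hist)                                # decide u_k at time k
    hist["u"].append(u_k)
    x_next = f(hist["x"][-1], u_k, hist["w"][-1])    # system response
    hist["x"].append(x_next)                         # observe x_{k+1}
    hist["w"].append(rng.normal())                   # observe w_{k+1}
print(hist["x"])
```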


Stochastic optimal control

Hereafter we assume $u_k = \mu_k(x_k)$ [3].

Three equivalent formulations:

1. In nested form

2. Over a product probability space

3. As a dynamic programming recursion

[3] This is an essential assumption to formulate the stochastic OCP as a DP recursion. This way, $u_k$ is computed at time $k$ without using historical information of the process, i.e., any of $w_0, w_1, \ldots, w_{k-1}$.


Nested formulation

Formulation as a nested problem: the total cost function is (where $\pi = (\mu_0, \mu_1, \ldots, \mu_{N-1})$ with $u_i = \mu_i(x_i)$) [4]

$V_N(x_0, \pi) = \mathrm{E}_{w_0}\Big[\ell_0(x_0, \mu_0(x_0), w_0) + \mathrm{E}_{w_1}\big[\ell_1(x_1, \mu_1(x_1), w_1) + \mathrm{E}_{w_2}[\,\cdots + \mathrm{E}_{w_{N-1}}[\ell_{N-1}(x_{N-1}, \mu_{N-1}(x_{N-1}), w_{N-1}) \mid x_{N-1}, \mu_{N-1}(x_{N-1})] \mid \cdots\,]\big] \;\Big|\; x_0, \mu_0(x_0)\Big],$

where the states $x_k$ satisfy

$x_{k+1} = f(x_k, \mu_k(x_k), w_k)$

for $k \in \mathbb{N}_{[0,N-2]}$.

[4] It is easy to wedge in a terminal cost function of the form $\ell_N(x_N, u_N, w_N) = V_f(x_N, w_N)$.


Product space formulation

We can use the following result to rearrange the terms in $V_N$. For every measure space $(\Omega, \mathcal{F}, \mathrm{P})$, measurable $h : \Omega \to \mathbb{R}$ and $\lambda \in (-\infty, \infty]$ we have

$\lambda + \int h \,\mathrm{dP} = \int (\lambda + h) \,\mathrm{dP}.$

And recall that the expectation of a random variable $h$ on a probability space $(\Omega, \mathcal{F}, \mathrm{P})$ is given by the Lebesgue integral

$\mathrm{E}[h] = \int h \,\mathrm{dP}.$


Product space formulation

Assume $\ell_k > -\infty$. Then,

$V_N(x_0, \pi) = \mathrm{E}_{w_0}\Big[\mathrm{E}_{w_1}\big[\mathrm{E}_{w_2}[\ldots \mathrm{E}_{w_{N-1}}\big[\textstyle\sum_{k=0}^{N-1} \ell_k(x_k, \mu_k(x_k), w_k) \,\big|\, x_{N-1}, \mu_{N-1}(x_{N-1})\big] \mid \cdots\,]\big] \;\Big|\; x_0, \mu_0(x_0)\Big].$


* The product probability space

To define a product probability space, we need to introduce the notion of a p-system on $\Omega$ [5], which is a collection $\mathcal{A}$ of sets such that

$A_1, A_2 \in \mathcal{A} \Rightarrow A_1 \cap A_2 \in \mathcal{A}.$

Example: on $\mathbb{R}$, the class $\mathcal{A} = \{(-\infty, b] : b \in \mathbb{R}\}$ is a p-system.

[5] p for product; also known as a π-system or pi-system.


* The product probability space

If two (probability) measures coincide on a p-system, they coincide everywhere; thus, it suffices to define a measure on a p-system.

Recall that the Cartesian product of two sets $A$, $B$ is

$A \times B = \{(a, b) : a \in A,\ b \in B\}.$

Let $(A, \mathcal{F}_A)$ and $(B, \mathcal{F}_B)$ be measurable spaces and let

$\mathcal{A} = \{S_A \times S_B : S_A \in \mathcal{F}_A,\ S_B \in \mathcal{F}_B\}.$

To define a (probability) measure on $A \times B$ it suffices to define it on $\mathcal{A}$.


Product space formulation

If the conditions of Fubini's Theorem are satisfied [6], then

$V_N(x_0, \pi) = \mathrm{E}\left[\sum_{k=0}^{N-1} \ell_k(x_k, \mu_k(x_k), w_k)\right],$

where $\mathrm{E}$ is the expectation operator in the product measure space of $(\Omega_k, \mathcal{F}_k, \mathrm{P}_k)$ for $k \in \mathbb{N}_{[0,N-1]}$, and the states $x_1, x_2, \ldots, x_{N-1}$ are functions of $x_0$ and $w_0, w_1, \ldots, w_{N-1}$ satisfying the system dynamics.

[6] What are these conditions? See: R. Ash, Real analysis and probability, Academic Press, 1972.
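In this product-space form, $V_N(x_0, \pi)$ can be estimated numerically by sampling whole disturbance sequences $(w_0, \ldots, w_{N-1})$ and averaging the accumulated stage cost. A minimal Monte Carlo sketch (the scalar linear dynamics, the quadratic stage cost, the Gaussian disturbance and the fixed Markov policy are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, G = 0.9, 1.0, 1.0          # illustrative scalar dynamics
Q, R = 1.0, 0.1                  # illustrative quadratic stage cost

def mu(k, x):                    # Markov policy u_k = mu_k(x_k)
    return -0.6 * x

def V_N_estimate(x0, N=10, samples=20000):
    total = 0.0
    for _ in range(samples):
        x, cost = x0, 0.0
        for k in range(N):
            u = mu(k, x)
            w = rng.normal()                     # w_k sampled from its distribution
            cost += Q * x**2 + R * u**2          # ell_k(x_k, u_k, w_k)
            x = A * x + B * u + G * w            # x_{k+1} = f(x_k, u_k, w_k)
        total += cost
    return total / samples                       # estimate of E[ sum_k ell_k ]

print(V_N_estimate(x0=2.0))
```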


DP recursion

It follows from the nested formulation of $V_N$ that the DP recursion is

$V_0^\star(x) = 0,$

and for $j = 0, \ldots, N-1$,

$V_{j+1}^\star(x) = \inf_{u \in U_{N-j}(x)} \mathrm{E}_{w_{N-j}}\big[\ell_{N-j}(x, u, w_{N-j}) + V_j^\star(f(x, u, w_{N-j}))\big].$

Notice that tacitly we have assumed $u_k = \mu_k(x_k)$; this is a condicio sine qua non [7] for DP.

[7] In Latin, condicio sine qua non refers to an indispensable and essential action, condition, or ingredient. We will study later the case of scenario trees, where we can deviate from this rule.
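When the disturbance set is finite and the state and input are gridded, this recursion can be carried out numerically. A minimal sketch with a time-invariant stage cost (the dynamics, cost and discretisation are illustrative assumptions; the cost-to-go is interpolated between grid points):

```python
import numpy as np

# illustrative finite-disturbance problem: x+ = a*x + b*u + w, quadratic cost
a, b = 0.9, 1.0
w_vals, w_prob = np.array([-0.2, 0.0, 0.2]), np.array([0.25, 0.5, 0.25])
x_grid = np.linspace(-3.0, 3.0, 121)
u_grid = np.linspace(-1.0, 1.0, 41)
ell = lambda x, u: x**2 + 0.1 * u**2

N = 8
V = np.zeros_like(x_grid)                 # V*_0(x) = 0
for j in range(N):                        # compute V*_{j+1} from V*_j
    V_new = np.empty_like(V)
    for ix, x in enumerate(x_grid):
        best = np.inf
        for u in u_grid:
            x_next = a * x + b * u + w_vals                     # one successor per w
            cont = np.interp(x_next, x_grid, V)                 # V*_j(f(x, u, w))
            best = min(best, ell(x, u) + w_prob @ cont)         # E_w[ ell + V*_j ]
        V_new[ix] = best
    V = V_new
print(V[np.searchsorted(x_grid, 0.0)])    # approximate V*_N(0)
```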


Stochastic programming vs DP

1. In DP we are bound to assume $u_k = \mu_k(x_k)$ [8]; in stochastic programming we can have $u_k = \mu_k(x_0, w_0, \ldots, w_{k-1}, x_k)$.

2. In DP we assume that the underlying random process $w_0, w_1, \ldots, w_{N-1}$ is stagewise independent.

3. There are cases where we can apply DP without assuming stagewise independence; e.g., scenario trees (later).

[8] Such policies are known as Markovian. Whether a non-Markovian policy can be better than a Markovian one is a nontrivial question which is treated in Bertsekas & Shreve, 1978.


Remarks

Stochastic programming problems are very difficult to solve even for (ostensibly) simple cases such as unconstrained linear systems.

We usually have to resort to simplifying assumptions, such as:

1. Assume that the underlying process is
   1.1 iid
   1.2 iid and normal
2. Discretisation of probability distributions (scenario trees)
3. Optimise over Markovian policies only, i.e., $u_k = \mu_k(x_k)$
4. Optimise over semi-Markovian policies only, i.e., $u_k = \mu_k(x_0, x_k)$
5. Parametrisation of inputs, e.g., $u_k = \sum_{i=0}^{k-1} H_i w_i + h_i$


A little exercise

Assume $\ell_k(\cdot, \cdot, w)$ are convex for all $w \in \Omega_k$ and the system dynamics is linear,

$x_{k+1} = A(w_k) x_k + B(w_k) u_k + d(w_k).$

We impose the constraints $u_k \in U$, where $U$ is a nonempty convex closed set. Assume that the $w_k$ are stagewise independent and $u_k = \mu_k(x_k)$.

Show that $V_N^\star(x)$ is a convex function.


Exercise

Assume $\ell_k(x, u, w) = x'Q_k x + u'R_k u$, with $Q_k \in \mathbb{S}_+^n$, $R_k \in \mathbb{S}_{++}^n$, and the system dynamics is given by

$x_{k+1} = A_k x_k + B u_k + v_k,$

where $A_k \sim \mathcal{MN}_{n \times n}(\bar{A}_k, U_k, V_k)$ [9] and $v_k \sim \mathcal{N}_n(d_k, \Sigma_k)$; $A_k$ and $v_k$ are independent and neither of them is known at time $k$. Determine $V_2^\star(x)$ using DP.

[9] $A(w_k)$ is a random matrix and it follows the matrix normal distribution, whose definition and many useful properties can be found in: A.K. Gupta and D.K. Nagar, Matrix variate distributions, Chapman & Hall, 2000.


Further reading

1. D.P. Bertsekas and S.E. Shreve, Stochastic optimal control: the discrete time case, Academic Press, 1978.

2. A. Shapiro, D. Dentcheva and A. Ruszczynski, Lectures on stochastic programming – modeling and theory, MPS-SIAM Series on Optimization, 2009.


II. SMPC taxonomy


System dynamics

- Linear: $x_{k+1} = A(w_k) x_k + B(w_k) u_k + d(w_k)$
- Nonlinear: $x_{k+1} = f(x_k, u_k, w_k)$


Type of uncertainty

- Additive: $x_{k+1} = f(x_k, u_k) + w_k$
- Parametric (linear case): $x_{k+1} = A(w_k) x_k + B(w_k) u_k$
- Both


Uncertainty over time #1

- Time-varying: $x_{k+1} = f(x_k, u_k, w_k)$
- Time-invariant: $x_{k+1} = f(x_k, u_k, w)$


Uncertainty over time #2

- IID – all $w_k$ have the same probability distribution and they are independent,
- Markovian – the probability distribution of $w_{k+1}$ is conditioned on $w_k$.


Control policy parametrisation

- Affine policy parametrisation [10]

  $u_{k+j|k} = \mu_j(w_k, w_{k+1|k}, \ldots, w_{k+j-1|k}) = H_j w_{k+j-1|k} + b_j$

- Blocking affine policy parametrisation
- Prestabilising feedback control as in stochastic tube MPC [11]
- Open-loop control actions [12]

[10] Kouvaritakis, Cannon and Munoz-Carpintero 2013; Oldewurtel et al. 2008; Korda, Gondhalekar, Cigler, Oldewurtel 2011.
[11] Cannon, Kouvaritakis, Ng 2009; Cannon et al. 2011.
[12] Kim and Braatz 2013; Bernardini and Bemporad, 2009, 2012.


Type of constraints

- Hard constraints: $(x_k, u_k) \in Z$
- Probabilistic constraints
  - Individual: $\mathrm{P}[G^{(i)} x(t) \le g^{(i)}] \ge 1 - \alpha_i$ for all $i$
  - Joint: $\mathrm{P}[G^{(i)} x(t) \le g^{(i)},\ \forall i] \ge 1 - \alpha$
- Expectation [13]
- Saturation of inputs [14]

[13] Hokayem, Cinquemani et al. 2012.
[14] Hokayem, Chatterjee and Lygeros 2009.


Uncertainty propagation

- Stochastic tube

  $x_k = z_k + e_k$
  $u_k = K x_k + c_k$

- Scenario-based
- Gaussian mixture
- Other


Availability of feedback information

- State
  - Full state feedback
  - Output feedback
- Disturbance
  - Measured disturbance
  - Not measured


Further reading

1. A. Mesbah, “Stochastic Model Predictive Control: A Review,” IEEE ControlSystems Magazine, 2016.

2. M. Kamgarpour, P. Hokayem, D. Chatterjee, M. Prandini, S. Garatti and A. Abate, "Final report on model predictive control for stochastic hybrid systems," report of project "Moves": http://www.movesproject.eu/deliverables/WP3/D3.2.pdf


III. Scenario trees

- The scenario tree structure
- Causality
- DP on a scenario tree


Motivation

1. Useful for numerical computations

2. Can be constructed from observations (data-driven)

3. They model non-iid processes

4. They provide a model for uncertainty propagation

5. Assumption: Ωk are finite


Applications

- Micro-grids [Hans et al. '15]
- Drinking water networks [Sampathirao et al. '15]
- HVAC [Long et al. '13, Zhang et al. '13, Parisio et al. '13]
- Financial systems [Patrinos et al. '11, Bemporad et al. '14]
- Chemical processes [Lucia et al. '13]
- Distillation columns [Garrido and Steinbach '11]


Scenario tree structure

[Figure: scenario tree structure]

Definitions

- Let $N - 1$ be the last stage of the tree
- At stage $k$ we have $\mu(k)$ nodes and $\mu(0) = 1$
- The nodes at stage $N - 1$ are called leaf nodes
- The node at stage $0$ is the root node
- A scenario is an admissible path from the root node to a leaf node
- The tree counts $\mu(N - 1)$ scenarios
- The value of $\omega$ at stage $k$, node $i$, is denoted by $\omega_k^i$
- Each node $i \in \mathbb{N}_{[1,\mu(k)]}$ at stage $k \in \mathbb{N}_{[0,N-2]}$ has a set of children $\mathrm{child}(k, i) \subseteq \mathbb{N}_{[1,\mu(k+1)]}$
- Each node $i \in \mathbb{N}_{[1,\mu(k)]}$ at stage $k \in \mathbb{N}_{[0,N-2]}$ has a unique ancestor, which is a node $j \in \mathbb{N}_{[1,\mu(k-1)]}$ at stage $k - 1$


Definitions

Conditional probability:

$p_k^{i,j} = \mathrm{P}[\omega_{k+1} = \omega_{k+1}^j \mid \omega_k = \omega_k^i].$

We have

$\sum_{j \in \mathrm{child}(k,i)} p_k^{i,j} = 1, \quad \text{for all } k \in \mathbb{N}_{[0,N-2]},\ i \in \mathbb{N}_{[1,\mu(k)]}.$


* Alternative definitions

[Figure: scenario tree with its nodes enumerated 1–10]

* Alternative definitions

- We enumerate all nodes of the tree by an index $\alpha \in \mathbb{N}_{[1,A]}$
- The value of $\omega$ at node $\alpha$ is denoted by $\omega^\alpha$
- The set of nodes at stage $k$ is denoted by $\Omega_k$
- Each $\alpha \in \Omega_k$, $k \in \mathbb{N}_{[0,N-1]}$, defines the set of children nodes $\mathrm{child}(\alpha) \subseteq \Omega_{k+1}$
- Each $\alpha \in \Omega_k$, $k \in \mathbb{N}_{[1,N]}$, defines a unique ancestor $\mathrm{anc}(\alpha) \in \Omega_{k-1}$
- We define the probability vectors $p^\alpha$ for $\alpha \in \Omega_k$ so that
  - $p^\alpha \in \mathbb{R}^{|\mathrm{child}(\alpha)|}$
  - $\sum_{\beta \in \mathrm{child}(\alpha)} p^\alpha(\beta) = 1$
  - $p^\alpha(\beta) = \mathrm{P}[\omega_{k+1} = \beta \mid \omega_k = \alpha]$


A few properties

For every $k \in \mathbb{N}_{[0,N-2]}$ we have

$\Omega_{k+1} = \bigcup_{i \in \mathbb{N}_{[1,\mu(k)]}} \mathrm{child}(k, i)$

and, for fixed $k$, the sets $\{\mathrm{child}(k, i)\}_i$ are disjoint.


A few properties

For $\alpha \in \Omega_k$ the family of sets $\{\mathrm{child}(\alpha)\}_{\alpha \in \Omega_k}$ defines a partition of $\Omega_{k+1}$, that is

$\Omega_{k+1} = \bigcup_{\alpha \in \Omega_k} \mathrm{child}(\alpha),$

and

$\alpha_1 \neq \alpha_2 \Rightarrow \mathrm{child}(\alpha_1) \cap \mathrm{child}(\alpha_2) = \emptyset.$


A few properties

The probability of a scenario [15], identified by a leaf node $\alpha \in \Omega_N$, is defined as

$\pi^\alpha = \mathrm{P}[\omega_N = \alpha \mid \omega_0 = \omega_0^1],$

and is given by

$\pi^\alpha = \prod_{i=1}^{N-1} p^{\mathrm{anc}^i(\alpha)},$

where $\mathrm{anc}^1(\alpha) = \mathrm{anc}(\alpha)$ and

$\mathrm{anc}^{k+1}(\alpha) = \mathrm{anc}(\mathrm{anc}^k(\alpha)).$

[15] Detail: this probability is defined in the product space $(\prod_k \Omega_k, \otimes_k \mathcal{F}_k)$.
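A minimal way to encode such a tree in code is to store, for every non-root node, its unique ancestor and the conditional probability of reaching it from that ancestor; the probability of a scenario is then the product of the conditional probabilities along the path from a leaf back to the root. A small sketch (the tree and its probabilities are illustrative assumptions):

```python
# nodes enumerated 0..6; node 0 is the root (stage 0)
anc    = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}               # unique ancestor of each non-root node
stage  = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 2}
cond_p = {1: 0.6, 2: 0.4, 3: 0.5, 4: 0.5, 5: 0.3, 6: 0.7}   # P[node | its ancestor]

def children(node):
    return [n for n, a in anc.items() if a == node]

def scenario_probability(leaf):
    """Product of conditional probabilities along the path root -> leaf."""
    p, node = 1.0, leaf
    while node != 0:
        p *= cond_p[node]
        node = anc[node]
    return p

leaves = [n for n in stage if not children(n)]
print({leaf: scenario_probability(leaf) for leaf in leaves})
print(sum(scenario_probability(leaf) for leaf in leaves))    # should sum to 1
```

The last line checks that the scenario probabilities sum to one, which holds because every node's conditional probabilities over its children sum to one.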


Notes

- We allow $w_k^i = w_k^j$ for $i \neq j$ – it is not the value that identifies the node [16]!
- We have as many scenarios as leaf nodes, that is $\mu(N-1)$,
- Scenarios are sequences $(w_k^{i_k})_k$ where $i_{k-1} = \mathrm{anc}(k, i_k)$,
- Every node $w_k^i$ of the tree can be identified by its stage $k$ and a scenario running through it, or by a sequence $(w_j^{i_j})_j$ leading to $w_k^i$,
- The probability of a scenario $(w_k^{i_k})_k$ (on the space of scenarios [17]), which is identified by a leaf node $i \in \mathbb{N}_{[1,\mu(N-1)]}$, is given by the product of the conditional probabilities that connect its nodes.

[16] It is, instead, its history.
[17] We will give a formal definition of this space later.


IID processes

[Figure]

Markov chains

[Figure: scenario tree generated by a Markov chain with states A and E]

Filtration

A filtration is an increasing sequence of σ-algebras which, here, we may construct as follows:

- $\mathcal{F}_{N-1} = 2^{\Omega_{N-1}}$: take all subsets of $\Omega_{N-1}$
- Let $\mathcal{F}_{N-2} \subseteq \mathcal{F}_{N-1}$ be the smallest σ-algebra containing $\mathrm{child}(N-2, i)$ for all $i$
- Recursively, construct $\mathcal{F}_{N-j} \subseteq \mathcal{F}_{N-j+1}$
- Eventually, $\mathcal{F}_0 = \{\emptyset, \Omega_{N-1}\}$; recall that $w_0$ is deterministic


Filtration

[Figures: the filtration $\{\mathcal{F}_k\}_k$ illustrated on the scenario tree]

Filtration

Some remarks:

- Every node $i \in \mathbb{N}_{[1,\mu(k)]}$ at a stage $k$ corresponds to an event in $\mathcal{F}_k$.
- The cardinality of $\mathcal{F}_k$ is $2^{|\Omega_k|}$ (why?)
- The product space $(\Omega, \mathcal{F}, \mathrm{P}) = \prod_k (\Omega_k, \mathcal{F}_k, \mathrm{P}_k)$, equipped with the filtration $\{\mathcal{F}_k\}_k$, becomes a filtered probability space.


State sequence

The state dynamics is given by $x_{k+1} = f(x_k, u_k, w_k)$, so, starting at $k = 0$ from a state $x_0$ and knowing $w_0$, the predicted state sequence is [18]

$x_1 = f(x_0, \mu_0(x_0, w_0), w_0) = f(x_0, u_0, w_0)$

and for all $j \in \mathbb{N}_{[0,N-2]}$

$x_{j+1} = f(x_j, \mu_j(x_0, \mathbf{w}_j, x_j), w_j),$

where $\mathbf{w}_j = (w_0, w_1, \ldots, w_j)$.

[18] Warning: abuse of notation! Here $x_0$ is an observation whereas $x_k$ is an estimate of a future state (which is a random variable). Thus, $x_2$ is not the state measurement at time $k = 2$; a more proper notation would be $x_{k+2|k}$.


Decision making across the tree nodes

- At $k = 0$ we know $x_0$ and $w_0$, so we decide $u_0$
- At $k = 1$ the state will be $x_1 = f(x_0, u_0, w_0)$ and we observe $w_1 \in \Omega_1$
- For each $i \in \mathbb{N}_{[1,\mu(1)]}$ we decide a $u_1^i$ and apply it to the system
- The next state will be $x_2^i = f(x_1, u_1^i, w_1^i)$, $i \in \mathbb{N}_{[1,\mu(1)]}$
- At stage $k$ we decide the input $u_k$ according to the information available so far, so $u_k = \mu_k(x_0, \mathbf{w}_k, x_k)$
- or, equivalently, we choose $u_k^i$ at each node of the tree at stage $k$.


State sequence

The state sequence becomes

$x_{k+1}^i = f(x_k^j, u_k^i, w_k^i),$

for all $k \in \mathbb{N}_{[0,N-2]}$ and $i \in \mathbb{N}_{[1,\mu(k+1)]}$, where $j = \mathrm{anc}(k, i)$.
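A small sketch of this node-wise state propagation, using the same dictionary-based tree encoding as before (the dynamics, inputs and disturbance values are illustrative assumptions):

```python
# small illustrative tree: node 0 is the root; anc gives the unique ancestor of each node
anc   = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
w_val = {1: 0.1, 2: -0.1, 3: 0.2, 4: -0.2, 5: 0.0, 6: 0.1}   # disturbance value at each node
u_val = {1: -0.3, 2: -0.4, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0}   # one input per (non-root) node

def f(x, u, w):
    return 0.9 * x + u + w                    # illustrative scalar dynamics

x = {0: 1.0}                                  # root state = current measurement x_0
for node in sorted(anc):                      # ancestors carry smaller indices here
    j = anc[node]                             # j = anc(k, i)
    x[node] = f(x[j], u_val[node], w_val[node])   # x^i_{k+1} = f(x^j_k, u^i_k, w^i_k)
print(x)
```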


Adaptation to $\mathcal{F}_k$

By construction $\{u_k\}_k$ is an $\mathcal{F}_k$-adapted random process (why?) and

$\mathrm{E}[u_k \mid \mathcal{F}_k] = u_k.$

The sequence $\{x_k\}_k$ is a predictable process with respect to $\{\mathcal{F}_k\}_k$, i.e., $x_{k+1}$ is $\mathcal{F}_k$-adapted, that is

$\mathrm{E}[x_{k+1} \mid \mathcal{F}_k] = x_{k+1}.$

Indeed, recall that

$x_{k+1} = f(x_k, \mu_k(x_0, \mathbf{w}_k, x_k), w_k).$


DP on scenario trees

Assuming $x_k$ and $w_k$ are observable at time $k$, we decide $u_k^i$ at every edge of the tree, i.e., one input for every node of the $w$-tree.


DP on scenario trees

Exercise. Solve the DP problem assuming $x_k$ and $w_k$ are observable at time $k$. Assume linear dynamics of the form [19]

$x_{k+1}^i = A(w_k^i) x_k^j + B(w_k^i) u_k^i + w_k^i,$

for all $k \in \mathbb{N}_{[0,N-2]}$ and $i \in \mathbb{N}_{[1,\mu(k+1)]}$, where $j = \mathrm{anc}(k, i)$.

[19] For convenience we may denote $A_k^i = A(w_k^i)$ and $B_k^i = B(w_k^i)$. Since $w_k$ is observable at time $k$, we also observe $A_k$ and $B_k$.
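One way to organise the backward pass for this exercise numerically is sketched below on a small scalar instance: at each non-leaf node the cost-to-go is the stage cost plus the expectation, over its children, of the minimal cost obtained by choosing $u_k^i$ after $w_k^i$ (hence $A_k^i$, $B_k^i$) is revealed. The tree, the values of $A(w)$ and $B(w)$, the quadratic costs and the terminal cost are illustrative assumptions; an exact solution would propagate quadratic value functions instead of gridded ones.

```python
import numpy as np

# scalar illustrative instance: x^i_{k+1} = a_i x^j_k + b_i u^i_k + w_i, quadratic costs
anc    = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
cond_p = {1: 0.6, 2: 0.4, 3: 0.5, 4: 0.5, 5: 0.3, 6: 0.7}   # conditional probabilities
a = {1: 0.9, 2: 1.1, 3: 0.9, 4: 1.1, 5: 0.9, 6: 1.1}        # A(w^i_k), known once w^i_k is observed
b = {n: 1.0 for n in anc}                                    # B(w^i_k)
w = {1: 0.1, 2: -0.1, 3: 0.2, 4: -0.2, 5: 0.0, 6: 0.1}
Q, R = 1.0, 0.1
x_grid = np.linspace(-3.0, 3.0, 121)
u_grid = np.linspace(-1.0, 1.0, 41)

children = {n: [m for m, p in anc.items() if p == n] for n in [0, 1, 2]}
leaves = [n for n in anc if n not in children]

# cost-to-go stored on a grid; illustrative terminal cost x'Qx at the leaves
V = {n: Q * x_grid**2 for n in leaves}
for node in [2, 1, 0]:                        # backward pass over the non-leaf nodes
    Vn = np.empty_like(x_grid)
    for ix, x in enumerate(x_grid):
        exp_cost = 0.0
        for c in children[node]:
            # u^c_k is chosen after observing w^c_k (one input per child node)
            best = min(R * u**2 + np.interp(a[c] * x + b[c] * u + w[c], x_grid, V[c])
                       for u in u_grid)
            exp_cost += cond_p[c] * best
        Vn[ix] = Q * x**2 + exp_cost
    V[node] = Vn
print(V[0][60])                               # approximate cost-to-go at the root for x = 0
```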


DP on scenario trees

Assuming only $x_k$ is observable at time $k$, we decide $u_k^i$ at every edge of the $x$-tree, i.e., as in the following figure:

[Figure]


Scenario-based SMPC

We'll see how to design an MS-stabilising SMPC using scenario tree representations of uncertainty.


Definition of LV

Consider the linear autonomous system

$x_{k+1} = f(x_k, w_k).$

Assume $w_k$ is not observable at time $k$ and let $V$ be a function which maps an $x \in \mathbb{R}^n$ to an $\mathbb{R}_+$-valued random variable [20], for which we define the random variable

$LV(x_k) := \mathrm{E}[V(x_{k+1}) - V(x_k) \mid \mathcal{F}_k].$

[20] We'll avoid the details to keep the notation reasonably simple.


A useful property of LV

A useful property of $LV$ (the discrete version of Dynkin's formula): for $0 \le k_1 \le k_2$,

$\mathrm{E}[V(x_{k_2}) - V(x_{k_1}) \mid \mathcal{F}_{k_1}] = \mathrm{E}\left[\sum_{j=k_1}^{k_2-1} LV(x_j) \,\Big|\, \mathcal{F}_{k_1}\right].$

Note: to prove this we only need the tower property:

$\mathcal{H}_1 \subseteq \mathcal{H}_2 \Rightarrow \mathrm{E}[\mathrm{E}[X \mid \mathcal{H}_2] \mid \mathcal{H}_1] = \mathrm{E}[X \mid \mathcal{H}_1],$

where $X$ is a r.v. on $(\Omega, \mathcal{F}, \mathrm{P})$ and the $\mathcal{H}_i$ are sub-σ-algebras of $\mathcal{F}$.


A useful property of LV

Proof. Take $0 \le k_1 \le k_2$ and write

$V(x_{k_2}) - V(x_{k_1}) = \sum_{j=k_1}^{k_2-1} \big(V(x_{j+1}) - V(x_j)\big)$

$\Rightarrow \mathrm{E}[V(x_{k_2}) - V(x_{k_1}) \mid \mathcal{F}_{k_1}] = \mathrm{E}\left[\sum_{j=k_1}^{k_2-1} \big(V(x_{j+1}) - V(x_j)\big) \,\Big|\, \mathcal{F}_{k_1}\right]$

$= \mathrm{E}\left[\sum_{j=k_1}^{k_2-1} \mathrm{E}[V(x_{j+1}) - V(x_j) \mid \mathcal{F}_j] \,\Big|\, \mathcal{F}_{k_1}\right]$

$= \mathrm{E}\left[\sum_{j=k_1}^{k_2-1} LV(x_j) \,\Big|\, \mathcal{F}_{k_1}\right].$


Lyapunov theorem (MSS)

Assume that for all $x \in \mathbb{R}^n$ we have

$LV(x) \le -\gamma \|x\|^2,$

for some $\gamma > 0$; then $x_{k+1} = f(x_k, w_k)$ is MSS [21].

Proof (exercise). Use the property of $LV$ with $k_1 = 0$ and $k_2 = k$.

[21] We further have that $\big\{\mathrm{E}[\|x_k\|^2 \mid \mathcal{F}_0]\big\}_k$ is an $\ell_2$ sequence.


Lyapunov theorem (MSS)

If for all $x \in \mathbb{R}^n$ we have

$LV(x) \le -\alpha(\|x\|^2),$

for some convex $\mathcal{K}$-class function $\alpha$, then $x_{k+1} = f(x_k, w_k)$ is MSS.

Proof. The proof is an exercise. Show with an example that the convexity requirement cannot be omitted. Show also that we can alternatively use the condition $LV(x) \le -x'Lx$, where $L = L' \succ 0$.


Lyapunov theorem (MSES)

If for all $x \in \mathbb{R}^n$ we have

$LV(x) \le -\gamma \|x\|^2,$
$\alpha \|x\|^2 \le V(x) \le \beta \|x\|^2,$

for some $\alpha, \beta, \gamma > 0$, then $x_{k+1} = f(x_k, w_k)$ is MSES.

Proof. Easy exercise.


MS-stabilising stochastic MPC

Assume that $w_k$ over $(\Omega, \mathcal{F}, \mathrm{P})$ is IID. We formulate the following SMPC problem (unconstrained case) [Bernardini and Bemporad, 2012]:

$V^\star(x_k) = \min_{\pi = \{\mu_i\}_{i=0}^{N-1}} \mathrm{E}\, V_N(x_k, \pi)$

subject to

$x_{k|k} = x_k,$
$x_{k+i+1|k} = A(w_{k+i|k}) x_{k+i|k} + B(w_{k+i|k}) \mu_i(x_{k+i|k}), \quad \forall i \in \mathbb{N}_{[0,N-1]},$
$LV(x_{k|k}) \le -x_{k|k}' L x_{k|k},$

for some $L = L' \succ 0$, where $V(x) = x'Px$ and

$LV(x_{k|k}) = \mathrm{E}[V(x_{k+1|k}) - V(x_k) \mid x_{k|k}].$
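With a finite disturbance set, the conditional expectation in $LV$ is a finite sum, so the stabilising decrease constraint can be evaluated directly for a candidate first input. A minimal sketch (the system matrices, $P$ and $L$ are illustrative assumptions; in the SMPC problem this constraint is imposed on the decision variable $u_{k|k}$):

```python
import numpy as np

# illustrative finite-disturbance linear system with two modes
A = [np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[1.0, 0.2], [0.0, 1.05]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.8]])]
p = np.array([0.7, 0.3])                      # p_i = P[w_k = w_i]
P = np.eye(2)                                 # V(x) = x' P x (illustrative choice)
L = 0.1 * np.eye(2)                           # decrease matrix, L = L' > 0

def LV(x, u):
    """LV(x_{k|k}) = sum_i p_i [ V(A_i x + B_i u) - V(x) ] for a candidate first input u."""
    Vx = x @ P @ x
    total = 0.0
    for pi, Ai, Bi in zip(p, A, B):
        x_next = Ai @ x + Bi @ u
        total += pi * (x_next @ P @ x_next - Vx)
    return total

x0 = np.array([1.0, 0.5])
u0 = np.array([-0.8])
print(LV(x0, u0), LV(x0, u0) <= -x0 @ L @ x0)   # value and the decrease constraint
```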


MS-stabilising stochastic MPC

Because of the constraint

$LV(x_{k|k}) \le -x_{k|k}' L x_{k|k},$

the control law

$u_k = \mu_0(x_k)$

leads to an MSES closed-loop system (this SMPC problem is recursively feasible).


MS-stabilising stochastic MPC

NOTE: We can choose any cost function $V_N(x_k, \pi)$!


What about those trees?

Hold on... we’ll get there.


Inexact knowledge of P(dwk)

We previously assumed that:

- $w_k$ is an IID process on a probability space $(\Omega, \mathcal{F}, \mathrm{P})$
- The probability measure $\mathrm{P}$ is exactly known (and time invariant)
- In case $\Omega$ is finite and $\mathcal{F} = 2^\Omega$ it suffices to know $p = (\mathrm{P}(w_i))_{i=1}^s$ for all $w_i \in \Omega$,
- When $\Omega$ is finite we can identify probability measures with vectors
- Let

  $D = \{p \in \mathbb{R}^s : p \ge 0,\ 1_s' p = 1\};$

  thus any probability measure $p$ is an element of $D$.
- A set $\mathcal{P} \subseteq D$ is a set of probability measures.


Inexact knowledge of P(dwk)

We will now drop the IID assumption and assume that $w_k$ is a random variable over $(\Omega, \mathcal{F}, \mathrm{P}_k)$, where $\Omega = \{w_i\}_{i=1}^s$ is finite.

We define, for $\mathrm{P} \in D$,

$LV(x, \mu, \mathrm{P}) := \mathrm{E}_{\mathrm{P}}[V(x_{k+1|k}) - V(x_{k|k}) \mid x_{k|k} = x]$
$= \int_\Omega \big(V(x_{k+1|k}) - V(x_k)\big)\, \mathrm{P}(\mathrm{d}w \mid x_{k|k} = x)$
$= \sum_{i=1}^s \underbrace{\mathrm{P}[w_k = w_i]}_{p_i} \big(V(A(w_i)x + B(w_i)\mu(x)) - V(x)\big).$


Inexact knowledge of P(dwk)

In that case, the MS-stabilising constraint becomes

$LV(x, \mu_0, \mathrm{P}) \le -x'Lx, \quad \forall \mathrm{P} \in D,$

for some $\mu_0$.

[Figure: spectrum ranging from "No probabilistic information" to "Exact knowledge of the prob. measure"]

This is reminiscent of the worst-case distribution approach [22].

[22] See A. Shapiro, "Worst-case distribution analysis of stochastic programs," Math. Prog. 107(1), pp. 91-96, 2005.

71 / 94


MS-stabilising scenario-based SMPC

MS-stabilising SMPC formulation:

V*(x_k) = min_{π = {µ_i}_{i=0}^{N−1}}  E[ V_N(x_k, π) ]

subject to

x_{k|k} = x_k,

x^l_{k+i+1|k} = A(w^l_{k+i|k}) x^ι_{k+i|k} + B(w^l_{k+i|k}) u^l_{k+i|k},
    ∀ i ∈ N_[0,N−1], ι = anc(i, l), l ∈ N_[1,µ(i+1)],

LV(x_{k+1|k}, u_{k|k}, P) ≤ −x′_k L x_k,   ∀ P ∈ P

If P ⊆ D is a polytope with vertices {P_κ}_{κ=1}^{K}, we need to impose the above stabilising constraint only at those vertices, i.e.,

LV(x_1, u_0, P_κ) ≤ −x′_0 L x_0,   ∀ κ ∈ N_[1,K].
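Because LV(x, µ, P) is linear in the probability vector, checking the drift condition at the vertices of the polytope is indeed enough. A small sketch of such a vertex check, in the same spirit as the hypothetical LV helper above (names and dimensions again assumed):

```python
import numpy as np

def ms_constraint_holds(x, u, vertices, A_list, B_list, P_lyap, L):
    """Check LV(x, u, P_kappa) <= -x' L x at every vertex p of the ambiguity polytope."""
    V = lambda z: float(z @ P_lyap @ z)
    drift = lambda p: sum(pi * (V(A @ x + B @ u) - V(x))
                          for pi, A, B in zip(p, A_list, B_list))
    return all(drift(p) <= -float(x @ L @ x) for p in vertices)
```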

72 / 94


The constrained case

I The same approach applies to state/input-constrained systems so long as the formulation is recursively feasible

I For details see: D. Bernardini and A. Bemporad, “Stabilizing Model Predictive Control of Stochastic Constrained Linear Systems,” IEEE TAC 57(6), pp. 1468–1480, 2012.

73 / 94


IV. Affine disturbance feedback

74 / 94


Problem statement – dynamics

We’ll be studying a very simple case. The system dynamics is given by

xk+1 = Axk +Buk +Gwk,

where w_k is an IID process with w_k ∼ N(0, I). The disturbance w_k is not observable at time k.
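A few lines of Python suffice to simulate this model; the matrices and the feedback gain below are placeholders chosen only to make the snippet runnable, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
G = 0.01 * np.eye(2)
K = np.array([[-0.5, -1.0]])       # an illustrative feedback gain

x = np.array([1.0, 0.0])
for k in range(10):
    u = K @ x                      # u_k may depend on x_k but not on w_k
    w = rng.standard_normal(2)     # w_k ~ N(0, I), not observable at time k
    x = A @ x + B @ u + G @ w
print(x)
```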

75 / 94


Problem statement – constraints

The system is subject to hard input constraints

H_u u_k ≤ K_u,

and probabilistic state constraints (stage-wise):

P[ H^l_x · x_k ≤ K^l_x ] ≥ 1 − α^l,   l ∈ N_[1,s],

where H^l_x are vectors, K^l_x are scalars and α^l ∈ [0, 1].

76 / 94


Problem statement – policies

Along the horizon the inputs u_{k+j|k} are determined by causal control laws of the form

uk+j|k = ψk+j|k(xk|k, xk+1|k, . . . , xk+j|k)

= µk+j|k(xk|k, wk|k, . . . , wk+j−1|k),

so, along the horizon, we need to determine

µµµ_k = (u_{k|k}, µ_{k+1|k}, ..., µ_{k+N−1|k}),

which is a sequence of functions. Functions are infinite-dimensional objects, so the optimisation problem becomes intractable.

77 / 94


The stochastic MPC problem

We formulate the following optimisation problem:

V*_N(x_k) = min_{µµµ_k}  E[ V_N(x_k, µµµ_k) ],

subject to

x_{k+j+1|k} = A x_{k+j|k} + B u_{k+j|k} + G w_{k+j|k},   ∀ j ∈ N_[0,N−1]

u_{k+j|k} = µ_{k+j|k}(x_{k|k}, w_{k|k}, ..., w_{k+j−1|k}),   ∀ j ∈ N_[0,N−1]

H_u u_{k+j|k} ≤ K_u,   ∀ j ∈ N_[0,N−1]

P[ H^l_x · x_{k+j|k} ≤ K^l_x ] ≥ 1 − α^l,   ∀ l ∈ N_[1,s], ∀ j ∈ N_[1,N]

w_{k+j|k} ∼ N(0, I),   ∀ j ∈ N_[0,N−1]

x_{k|k} = x_k

78 / 94


Affine disturbance feedback

For convenience define

wk = (wk|k, wk+1|k, . . . , wk+N−1|k).

We restrict ourselves to causal policies whose functions have the simple form

µ_{k+j|k}(w_k) = m_j + Σ_{i=0}^{j−1} M_{j,i} w_{k+i|k}.

We then need to determine M_{j,i} and m_j, so the opt. problem becomes finite-dimensional.

79 / 94


Affine disturbance feedback

The affine disturbance feedback policy can be concisely written as

µµµk(wk) = Mwk + m,

where

M = [     0           0       · · ·       0           0
       M_{1,0}        0       · · ·       0           0
          ⋮           ⋮                    ⋮           ⋮
      M_{N−1,0}   M_{N−1,1}   · · ·   M_{N−1,N−2}     0  ] ,

m = ( m_0, m_1, ..., m_{N−1} ).
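A short sketch of how the stacked pair (M, m) can be assembled from the blocks M_{j,i} and m_j; the function name, the dictionary layout and the dimensions are assumptions made for the example.

```python
import numpy as np

def build_policy(M_blocks, m_blocks, nu, nw, N):
    """Assemble M (block strictly lower triangular) and m from the individual blocks.

    M_blocks[(j, i)] : nu-by-nw gain M_{j,i}, defined for 0 <= i < j <= N-1
    m_blocks[j]      : nu-vector m_j
    """
    M = np.zeros((N * nu, N * nw))
    for (j, i), Mji in M_blocks.items():
        M[j * nu:(j + 1) * nu, i * nw:(i + 1) * nw] = Mji
    m = np.concatenate([m_blocks[j] for j in range(N)])
    return M, m

# the policy is then evaluated as u_stack = M @ w_stack + m
```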

80 / 94


Finite dimensional problem

Define also

xk = (xk, xk+1|k,...,xk+N|k),

uk = (uk|k, uk+1|k,...,uk+N−1|k).

The state evolution of the system, x_k, given a sequence of inputs u_k and a sequence of disturbances w_k, is given by23

xk = Axk + Buk + Gwk.

23Exercise: Determine A, B and G.
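For those who want to check their answer to the exercise, the sketch below builds one possible choice of the stacked matrices under the convention x_k = (x_{k|k}, ..., x_{k+N|k}); the function name and dimensions are illustrative, not from the slides.

```python
import numpy as np

def prediction_matrices(A, B, G, N):
    """Stacked matrices such that x_stack = A_s @ x_k + B_s @ u_stack + G_s @ w_stack."""
    nx, nu = B.shape
    nw = G.shape[1]
    A_s = np.vstack([np.linalg.matrix_power(A, j) for j in range(N + 1)])
    B_s = np.zeros(((N + 1) * nx, N * nu))
    G_s = np.zeros(((N + 1) * nx, N * nw))
    for j in range(1, N + 1):
        for i in range(j):
            Apow = np.linalg.matrix_power(A, j - 1 - i)
            B_s[j * nx:(j + 1) * nx, i * nu:(i + 1) * nu] = Apow @ B
            G_s[j * nx:(j + 1) * nx, i * nw:(i + 1) * nw] = Apow @ G
    return A_s, B_s, G_s
```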

81 / 94


Probabilistic constraints

... and substituting uk = µµµk(wk):

xk = Axk + Buk + Gwk

= Axk + BMwk + Gwk + Bm

= Axk + (BM + G)wk + Bm

Then, the probabilistic constraints are written as

P[ H^l_x (A x_k + (BM + G) w_k + B m) ≤ K^l_x ] ≥ 1 − α^l

⇔ P[ H^l_x (BM + G) w_k + H^l_x (A x_k + B m) ≤ K^l_x ] ≥ 1 − α^l

82 / 94


Distribution function

Let X be a random variable over a probability space (IR, B_IR, P). The distribution of X is a measure µ : B_IR → [0, 1] with

µ(B) = P(X ∈ B) = P({ω : X(ω) ∈ B}).

The distribution of X is identified by the following function (why?), known as the distribution function of X, c : IR → [0, 1],

c(x) = µ((−∞, x]) = P(X ≤ x) = P({ω ∈ Ω : X(ω) ≤ x}).

83 / 94


Quantile function

We define the quantile function of X, Q : [0, 1]→ IR as

Q(p) = inf{ x ∈ IR : p ≤ c(x) } = inf{ x ∈ IR : P[X ≤ x] ≥ p }.

This is a generalised inverse of the distribution function.

84 / 94


Normal distribution

When X is normally distributed, X ∼ N(µ, σ²) with standard deviation σ, Q(p) is given by an explicit formula, namely

Q(p) = µ + √2 σ erf^{-1}(2p − 1).

As a result,

P[X ≤ x] ≥ 1 − α ⇔ x ≥ Q(1 − α).
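A quick numerical sanity check of this formula against SciPy’s built-in normal quantile (the numbers below are arbitrary):

```python
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

def normal_quantile(p, mu=0.0, sigma=1.0):
    """Q(p) = mu + sqrt(2) * sigma * erfinv(2p - 1)."""
    return mu + np.sqrt(2.0) * sigma * erfinv(2.0 * p - 1.0)

p = 0.95
print(normal_quantile(p, mu=1.0, sigma=2.0))   # should agree with
print(norm.ppf(p, loc=1.0, scale=2.0))         # SciPy's quantile (ppf)
```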

85 / 94


Multivariate Normal distribution

When X is normally distributed, X ∼ N (x,Σ) and Z := AX + b, then

Z ∼ N (Ax+ b, AΣA′).
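This affine-transformation rule is easy to verify empirically; the sketch below samples X and compares the sample mean and covariance of Z = AX + b with Ax̄ + b and AΣA′ (all numbers are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
b = np.array([1.0, -1.0])
xbar = np.array([0.2, 0.3])
Sigma = np.array([[1.0, 0.2], [0.2, 0.5]])

X = rng.multivariate_normal(xbar, Sigma, size=200_000)
Z = X @ A.T + b

print(Z.mean(axis=0))            # ~ A @ xbar + b
print(np.cov(Z, rowvar=False))   # ~ A @ Sigma @ A.T
```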

86 / 94


A little observation

Let y1 ∼ N(0_n, I_n) and y2 ∼ N(0_n, I_n) be independent random variables. Then

y := (y1, y2) ∼ N(0_{2n}, I_{2n}).

In our case, w_k ∼ N(0, I).

87 / 94


Probabilistic constraints revisited

The probabilistic constraints are written as

P[ H^l_x (BM + G) w_k ≤ g^l ] ≥ 1 − α^l,

where H^l_x (BM + G) is a row vector, w_k ∼ N(0, I) and g^l = K^l_x − H^l_x (A x_k + B m). This becomes

Q(1 − α^l) ‖H^l_x (BM + G)‖_2 ≤ g^l,

where Q is the quantile function of N(0, 1). This leads to the formulation of a second-order cone problem.
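For a fixed candidate (M, m), this deterministic reformulation is a single inequality per constraint; the helper below (names and shapes assumed for the example) evaluates it with NumPy/SciPy.

```python
import numpy as np
from scipy.stats import norm

def chance_constraint_ok(Hlx, Klx, A_s, B_s, G_s, M, m, x, alpha):
    """Check  Q(1 - alpha) * ||Hlx (B_s M + G_s)||_2 <= g^l,
    with g^l = Klx - Hlx (A_s x + B_s m) and w_k ~ N(0, I)."""
    a = Hlx @ (B_s @ M + G_s)               # row vector multiplying w_k
    g = Klx - Hlx @ (A_s @ x + B_s @ m)     # deterministic right-hand side
    return norm.ppf(1.0 - alpha) * np.linalg.norm(a, 2) <= g
```

When M and m are decision variables, the left-hand side is a norm of an affine function of M and the right-hand side is affine in m, so for α^l ≤ 1/2 (which makes Q(1 − α^l) ≥ 0) this is a second-order cone constraint.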

88 / 94


Simplification

Consider the following constraints with parameter β^l > 0:

H^l_x (BM + G) w^l_k + H^l_x (A x_k + B m) ≤ K^l_x,

‖w^l_k‖_∞ ≤ β^l.

Then, a posteriori, we can determine the probability

P[ H^l_x x_k ≤ K^l_x | ‖w^l_k‖_∞ ≤ β^l ],

given that w^l_k ∼ N(0, I).

89 / 94


Simplification

But we know that

P[ H^l_x x_k > K^l_x | ‖w^l_k‖_∞ ≤ β^l ] ≤ √e β^l e^{−(β^l)²/2},

so for the probabilistic constraints to be satisfied it suffices to choose β^l so that

√e β^l e^{−(β^l)²/2} ≤ α^l.
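Since √e β e^{−β²/2} is decreasing for β ≥ 1, the smallest admissible β^l can be found by a one-dimensional root search; a small sketch (function name assumed) follows.

```python
import numpy as np
from scipy.optimize import brentq

def smallest_beta(alpha, beta_max=20.0):
    """Smallest beta >= 1 with sqrt(e) * beta * exp(-beta**2 / 2) <= alpha."""
    f = lambda b: np.sqrt(np.e) * b * np.exp(-b ** 2 / 2.0) - alpha
    if f(1.0) <= 0.0:            # bound already below alpha at beta = 1
        return 1.0
    return brentq(f, 1.0, beta_max)

print(smallest_beta(0.05))       # beta needed for a 5% violation level
```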

90 / 94


Discussion

X Affine disturbance feedback and affine state feedback are equivalent [Goulart et al., '05]

X It is a suboptimal choice in the space of measurable functions

X But it is a computationally tractable approach

X The problem size explodes as the prediction horizon increases

X But there are approximating techniques such as the blocking affine parametrization.

91 / 94


Discussion (cont’d)

X Recursive feasibility is impossible to guarantee when the disturbance is unbounded

X Recursively feasible formulations are, as expected, too conservative [M. Korda et al. '11]

X We can however design a probabilistically resolvable control scheme [M. Ono, '12]

92 / 94


Further reading

1. M. Lorenzen, F. Dabbene, R. Tempo and F. Allgower, Constraint-Tightening and Stability in Stochastic Model Predictive Control, arXiv:1511.03488v1, 2015.

2. F. Oldewurtel, C.N. Jones, M. Morari, A Tractable Approximation of Chance Constrained Stochastic MPC based on Affine Disturbance Feedback, 2008.

3. M. Ono, Closed-Loop Chance-Constrained MPC with Probabilistic Resolvability, 51st IEEE CDC, Maui, Hawaii, USA, 2012.

4. M. Cannon, B. Kouvaritakis, D. Ng, Probabilistic tubes in linear stochastic model predictive control, Systems & Control Letters 58, pp. 747–753, 2009.

5. D. Chatterjee, P. Hokayem, J. Lygeros, Stochastic receding horizon control with bounded control inputs: a vector space approach, arXiv:0903.5444v3, 2009.

6. M. Korda, R. Gondhalekar, J. Cigler, and F. Oldewurtel, Strongly Feasible Stochastic Model Predictive Control, IEEE CDC, 2011.

7. M. Cannon, P. Couchman and B. Kouvaritakis, MPC for stochastic systems, Chapter in Assessment and Future Directions of Nonlinear Model Predictive Control, Springer, 2007.

93 / 94


Further reading

8. M. Cannon, Stochastic Model Predictive Control: State space methods, Lecture notes available online at http://users.ox.ac.uk/∼engs0169/pdf/cannon ifac08c.pdf

9. H. Kushner, Introduction to stochastic control, Holt, Rinehart and Winston Inc., New York, 1971.

10. M. Agarwal, E. Cinquemani, D. Chatterjee and J. Lygeros, On convexity of stochastic optimization problems with constraints, European Control Conference (ECC), Budapest, Hungary, 2009.

94 / 94