Transcript of: Stochastic processing networks: steady-state diffusion approximations

Page 1

Stochastic processing networks: steady-state diffusion approximations

Jim Dai

School of Operations Research and Information Engineering, Cornell University

and Institute for Data and Decision Analytics (iDDA)

Chinese University of Hong Kong, Shenzhen

May 14, 2018

Page 2

Collaborators

Masakiyo Miyazawa, Department of Information Sciences, Tokyo University of Science

Anton Braverman, Kellogg School of Management, Northwestern University

Chang Cao, Department of Statistical Science, Cornell University

Xiangyu Zhang, School of ORIE, Cornell University

Page 3

An example of generalized Jackson networks (GJNs)

[Figure: a three-station generalized Jackson network. External arrivals (rate λr) feed station 1; the state descriptors are the queue lengths L1, L2, L3, the residual interarrival time Re,1(t), and the residual service times Rs,1, Rs,2, Rs,3. Station 1 routes to stations 2 and 3 with probability 0.5 each; station 2 routes to station 1; station 3 routes to station 1 with probability 0.7 and out of the network with probability 0.3.]

Routing matrix:

P = ( 0    0.5  0.5 )
    ( 1    0    0   )
    ( 0.7  0    0   )

Page 4

Indirect method: limit interchange for GJN

The interchange-of-limits diagram:

L(r)(t) ⇒ L(t) as r → 0: Reiman (1984); the limit L is an SRBM.
L(t) ⇒ L(∞) as t → ∞: Harrison-Williams (1987).
L(r)(t) ⇒ L(r)(∞) as t → ∞: Down-Meyn (1994).
L(r)(∞) ⇒ L(∞) as r → 0: the interchange of limits, Gamarnik-Zeevi (2006).

Page 5

Gamarnik-Zeevi (2006): indirect method

Budhiraja and Lee (2009): second moments and uniform integrability

Gurvich (2014): multiclass queueing networks

Ye-Yao (2015): (head-of-line) bandwidth sharing networks

There is a growing literature: Tezcan (2008), Zhang-Zwart (2008), Katsuda (2010), Gamarnik-Stolyar (2012), D-Dieker-Gao (2014), and more...

Page 6

Basic adjoint relationship (BAR): three direct methods

Stein’s method: Gurvich (2014), Braverman-Dai (2017). Strongest results, but difficult for general systems.

Moment generating function (mgf)-BAR approach: Miyazawa (2015), Braverman-Dai-Miyazawa (2017) for GJN.

Drift method (quadratic-BAR approach): Eryilmaz-Srikant (2012), Maguluri-Srikant (2016), Wang-Maguluri-Srikant-Ying (2017). Surprisingly successful for single-pass systems.

Page 7

A two-link bandwidth sharing network

[Figure: a two-link bandwidth sharing network with links S1 and S2 and buffers B1, B2, B3; class 1, class 2 and class 3 jobs each arrive to their buffer and depart after service.]

Insensitivity of job-size distributions in heavy traffic. The full version of the conjecture, beyond moments, remains an open problem.

Page 9

Today’s topics

Topics
Mgf approach for multiclass queueing networks under priority policies: Braverman, Dai and Miyazawa.
Strong state space collapse: Cao, Dai, Miyazawa, Zhang.

Outline
An illustrative theorem for a re-entrant line
Mgf approach for GI/GI/1 queues
Proof sketches
An open problem
Self-contained tools

Page 10

A 2-station, 3-class reentrant line

[Figure: a 2-station, 3-class reentrant line. Class 1 arrivals enter buffer B1 at station S1, move to buffer B2 at station S2, then to buffer B3 at station S1, and depart as class 3.]

{(Te(i), Ts,1(i), Ts,2(i), Ts,3(i)), i ≥ 1} is an iid sequence with mean (1, m1, m2, m3) and finite second moments. The interarrival times Te(i) satisfy the heavy traffic condition

λ = 1 − r,  m1 + m3 = 1,  m2 = 1,  r ↓ 0.

Load parameters:

βi = λmi, i = 1, 2, 3,  ρ1 = β1 + β3 = λ < 1,  ρ2 = β2 = λ < 1.

Page 11

System dynamics

Assuming the (preemptive-resume) LBFS policy, one has the Markovian representation: X = {X(t)}t≥0 is a Markov process, where the state at time t is

X(t) = (L(t), Re(t), Rs(t)).

Li(t) = number of customers in buffer i at time t (including the one in service)
Re(t) = residual time until the next exogenous arrival
Rs,i(t) = residual time until the next service completion in buffer i

The reentrant line in the figure has a 7-dimensional representation:

(L1(t), Re(t), Rs,1(t)), (L2(t), Rs,2(t)), (L3(t), Rs,3(t)).

Page 12

A sample result

When λ = 1 − r < 1, the Markov process X is positive Harris recurrent (Dai 1995): as t → ∞,

Lr(t) ⇒ Lr(∞) = (Lr1(∞), Lr2(∞), Lr3(∞)).

Theorem 1

rLr(∞) ⇒ (L∗1(∞), L∗2(∞), 0) as r → 0.

rLr3(∞) ⇒ 0: state space collapse (SSC).
(rLr1(∞), rLr2(∞)) ⇒ (L∗1(∞), L∗2(∞)).

Page 13

State space collapse

The pre-limit is a K-dimensional problem, and the limit is a d-dimensional problem, where K is the number of buffers and d is the number of stations.

[Figure: a reentrant process flow, apparently from a semiconductor wafer fabrication line, with steps such as implant, oxide, barrier oxide and poly deposition. Original line: 85 stations, 2 products, product 1: 210 steps, product 2: 245 steps; in the figure: 16 stations, 48 steps.]

Page 14

Strong SSC

In proving a version of Theorem 1 for general multiclass queueing networks under priority policies, we need the following strong SSC:

rEν[Lr3(∞)] → 0.

Theorem 2

Assume interarrival and service times have phase-type distributions. Assume that the critically loaded fluid model has SSC. Then

sup_{r∈(0,r0]} ∑_{i∈H} Eν[(Lri(∞))p] < ∞ for p ≥ 1,

which implies the strong SSC.

Page 15

Fluid model SSC

[Figure: the 2-station, 3-class reentrant line of page 10.]

Fluid model (L1(0) can be ∞):

L1(t) = L1(0) + t − µ1T1(t),
L2(t) = L2(0) + µ1T1(t) − µ2T2(t),
L3(t) = L3(0) + µ2T2(t) − µ3T3(t),

where µi = 1/mi is the service rate for class i jobs, and H = {3}. When L3(t) > 0, LBFS gives dT3(t)/dt = 1, which implies

dL3(t)/dt = µ2 dT2(t)/dt − µ3 ≤ µ2 − µ3 < 0.

Hence L3(t) = 0 for t ≥ L3(0)/(µ3 − µ2).
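
A minimal Euler sketch of this drain-time bound, assuming the illustrative values m1 = 0.4, m2 = 1, m3 = 0.6 (so µ3 > µ2) and arbitrary initial fluid levels; it integrates the LBFS fluid dynamics while L3 > 0 and compares the hitting time of zero with the bound L3(0)/(µ3 − µ2):

    # Euler integration of the LBFS fluid model while L3 > 0 (illustrative parameters).
    m1, m2, m3 = 0.4, 1.0, 0.6            # assumed means with m1 + m3 = 1, m2 = 1
    mu1, mu2, mu3 = 1 / m1, 1 / m2, 1 / m3
    lam = 1.0                             # critical arrival rate of the fluid model
    L1, L2, L3 = 5.0, 3.0, 4.0            # assumed initial fluid levels
    L30 = L3
    dt, t = 1e-4, 0.0
    while L3 > 0:
        u3, u1 = 1.0, 0.0                 # LBFS: class 3 preempts class 1 at station 1
        u2 = 1.0 if L2 > 0 else min(mu1 * u1 / mu2, 1.0)   # station 2 allocation
        L1 += (lam - mu1 * u1) * dt
        L2 = max(L2 + (mu1 * u1 - mu2 * u2) * dt, 0.0)
        L3 += (mu2 * u2 - mu3 * u3) * dt
        t += dt
    print(t, L30 / (mu3 - mu2))           # hitting time is below the bound L3(0)/(mu3 - mu2)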

Page 16

LBFS priority policy?

[Figure: a 2-station, 5-class reentrant line with arrival rate α1; station 1 serves classes 1, 3, 5 (mean service times m1, m3, m5) and station 2 serves classes 2, 4 (mean service times m2, m4).]

Under the LBFS-FBFS priority policy and λ(m2 + m5) > 1:

[Figure: simulated job counts at stations 1 and 2 over time 0 to 12000; the counts grow toward 4000, suggesting instability.]

Page 17

Miyazawa’s mgf approach

For a family of nonnegative random vectors L(r)(∞) ∈ Rd+,

L(r)(∞) ⇒ L(∞) for some random vector L(∞)

if and only if the mgf converges:

φ(r)(θ) = E[e⟨θ,L(r)(∞)⟩] → E[e⟨θ,L(∞)⟩] for all θ ≤ 0.

The family {L(r)(∞)} is tight iff for any sequence rn → 0,

lim_{n→∞} φ(rn)(θ) = φ(θ) for all θ ≤ 0 implies that φ(0−) = φ(0−, . . . , 0−) = 1.   (1)

Equation (1) says φ(·) is left continuous at 0.
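
For illustration, a minimal sketch of this criterion, assuming (hypothetically) that L(rn)(∞) is one-dimensional exponential with mean mn, so that φ(rn)(θ) = 1/(1 − θmn) for θ ≤ 0:

    import numpy as np

    theta = np.array([-1.0, -0.1, -0.01, -0.001])    # theta < 0, approaching 0-
    for m in [1.0, 10.0, 100.0, 1000.0]:
        print(m, 1.0 / (1.0 - theta * m))            # mgf of Exp(mean m) at theta
    # If m_n -> 1:        the limit 1/(1 - theta) has phi(0-) = 1, so the family is tight.
    # If m_n -> infinity: the pointwise limit is 0 for every theta < 0, so phi(0-) = 0 != 1;
    #                     mass escapes to infinity and the family is not tight.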

Page 18

GI/GI/1 queue

[Figure: a single-server queue; arrivals join buffer B1 and are served by server S1.]

{Te(i), i ≥ 1} iid interarrival times; λ = 1/E[Te(i)].
{Ts(i), i ≥ 1} iid service times; µ = 1/E[Ts(i)].

Heavy traffic condition: λ = µ − r with r ↓ 0.

X = {X(t), t ≥ 0} is a Markov process, where

X(t) = (L(t), Re(t), Rs(t)),

where L(t) is the number of customers in the system, Re(t) is the remaining interarrival time, and Rs(t) is the remaining service time.

Page 19

Piecewise deterministic Markov process (PDMP)

The process X = (L, Re, Rs) is a piecewise deterministic Markov process (PDMP); Davis (1981). A sample path of a PDMP is composed of two parts: deterministic, continuous sections, and (random) jumps due to the expiration of remaining times.

[Figure: a sample path of the remaining times for the GI/GI/1 queue, showing L(t), Re(t) and Rs(t); Re(t) restarts at the interarrival times Te(1), . . . , Te(5) and Rs(t) restarts at the service times Ts(1), . . . , Ts(4).]

Page 20

Change of variables for PDMP

Consider a function f(x) = f(z, u, v): Z+ × R+ × R+ → R. Define the jump size ∆f(X(s)) = f(X(s)) − f(X(s−)). Then

f(X(t)) − f(X(0)) = ∫0^t (d/ds)f(X(s)) ds + ∑si∈(0,t] ∆f(X(si))

= −∫0^t [ (∂/∂u)f(X(s)) + 1{L(s) > 0}(∂/∂v)f(X(s)) ] ds   (continuous)

+ [ ∫0^t ∆f(X(s)) dNA(s) + ∫0^t ∆f(X(s)) dND(s) ]   (jumps)

NA: the arrival process. ND: the departure process. (The signs in the continuous part reflect that Re decreases at unit rate, and Rs decreases at unit rate while the server is busy.)

Page 21

Full BAR in GI/GI/1 setting

Assume X has stationary distribution ν.

Basic Adjoint Relationship (BAR)

0 = tEν[ −(∂/∂u)f(X(0)) − 1{L(0) > 0}(∂/∂v)f(X(0)) ]   (continuous)

  + Eν[ ∫0^t ∆f(X(s)) dNA(s) + ∫0^t ∆f(X(s)) dND(s) ]   (jumps)

The jump terms are intractable; getting rid of the arrival-jump term requires

ETe[ f(L + 1, Te, Rs) − f(L, 0, Rs) ] = 0   (2)

for any given L and Rs. Similarly,

ETs[ f(L − 1, Re, Ts) − f(L, Re, 0) ] = 0   (3)

for any given L ≥ 1 and Re.

Page 22

Exponential functions (I)

Fix a θ ≤ 0 and take

f(θ; z, u, v) = e^(θz + a(θ)u + b(θ)v).

Then (2) becomes

E[f(z + 1, Te, v) − f(z, 0, v)] = 0,
E[e^(θ(z+1) + a(θ)Te + b(θ)v)] = e^(θz + b(θ)v),
E[e^(θ + a(θ)Te)] = 1.

For each θ ≤ 0, find a = a(θ) such that

E[e^(a(θ)Te)] = e^(−θ).   (4)

Page 23

Exponential functions (II)

Similarly, for each θ ≤ 0, find b(θ) such that

E[e^(b(θ)Ts)] = e^θ.   (5)

Then (3) holds:

E[f(z − 1, u, Ts) − f(z, u, 0)] = 0 for z ≥ 1.
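
A minimal numerical sketch of solving (4) and (5): root-finding works for a general distribution; here, to have closed forms to check against, assume (hypothetically) Te ~ Exp(λ) and Ts ~ Exp(µ):

    import numpy as np
    from scipy.optimize import brentq

    lam, mu = 0.9, 1.0                       # assumed rates: Te ~ Exp(lam), Ts ~ Exp(mu)

    def mgf_exp(rate, s):                    # E[e^{sT}] for T ~ Exp(rate), needs s < rate
        return rate / (rate - s)

    def a_of(theta):                         # solve E[e^{a(theta) Te}] = e^{-theta}, eq. (4)
        return brentq(lambda a: mgf_exp(lam, a) - np.exp(-theta), -50.0, lam - 1e-9)

    def b_of(theta):                         # solve E[e^{b(theta) Ts}] = e^{theta}, eq. (5)
        return brentq(lambda b: mgf_exp(mu, b) - np.exp(theta), -50.0, mu - 1e-9)

    theta = -0.3
    a, b = a_of(theta), b_of(theta)
    print(a, lam * (1 - np.exp(theta)))      # closed form in the Exp case
    print(b, mu * (1 - np.exp(-theta)))      # closed form in the Exp case
    print(np.exp(theta) * mgf_exp(lam, a))   # jump condition E[e^{theta + a Te}] = 1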

Page 24

Exponential functions (III): Summary

Define a(θ) and b(θ) via (4) and (5). Set

f(θ; z, u, v) = e^(θz + a(θ)u + b(θ)v).

Then

(∂/∂u)f(θ; z, u, v) = a(θ)f(θ; z, u, v),
(∂/∂v)f(θ; z, u, v) = b(θ)f(θ; z, u, v).

Thus, BAR becomes

Eν[ −a(θ)f(θ; X(0)) − b(θ)1{L(0)>0}f(θ; X(0)) ] = 0,

or equivalently

[a(θ) + b(θ)]Eν[f(θ; X(0))] − b(θ)Eν[1{L(0)=0}f(θ; X(0))] = 0.   (6)

Page 25

Prelimit (restricted) BAR

From (6), for θ ≤ 0,

[a(θ) + b(θ)]Eν[f(θ; X(0))] − b(θ)Pν(L(0) = 0)Eν[f(θ; X(0)) | L(0) = 0] = 0,

[a(θ) + b(θ)]Eν[f(θ; X(0))] − b(θ)(1 − λ/µ)Eν[f(θ; X(0)) | L(0) = 0] = 0.

Scaling: change θ to rθ for any θ ≤ 0 to get the pre-limit BAR

[a(rθ) + b(rθ)]φ(r)(θ) − b(rθ)(1 − λ/µ)φ(r)0(θ) = 0,   (7)

where

φ(r)(θ) ≡ E[e^(rθLr(0) + a(rθ)Re(0) + b(rθ)Rs(0))] ≈ E[e^(θ(rLr(0)))],

φ(r)0(θ) ≡ Eν[f(rθ; X(0)) | L(0) = 0] = E[e^(a(rθ)Te(0) + b(rθ)Ts(0))] ≈ 1.

Page 26

Asymptotic expansion of coefficients in (7)

As θ → 0,

a(θ) ≈ −λ(θ + ce²θ²/2),   (8)

where

λ = 1/E(Te),  ce² = Var(Te)/(E(Te))².

Fix θ ≤ 0. Using (8) and its analogue for b, as r → 0,

a(rθ) + b(rθ) ≈ −λ(rθ + r²θ²ce²/2) − µ(−rθ + r²θ²cs²/2)
= (µ − λ)rθ − r²θ²(λce² + µcs²)/2
= r²θ − r²θ²(λce² + µcs²)/2   (using λ = µ − r)
≈ r²θ − r²θ²µ(ce² + cs²)/2.

Also

−b(rθ)(1 − λ/µ) ≈ (−µrθ + r²θ²µcs²/2)(1 − λ/µ) ≈ (−µrθ)(1 − λ/µ) = −r²θ.
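
A quick numerical check of these expansions, again assuming the exponential case, where a(θ) = λ(1 − e^θ), b(θ) = µ(1 − e^(−θ)) and ce² = cs² = 1:

    import numpy as np

    mu, theta = 1.0, -1.0
    for r in [0.1, 0.01, 0.001]:
        lam = mu - r                          # heavy traffic condition
        a = lam * (1 - np.exp(r * theta))     # a(r*theta) for Te ~ Exp(lam)
        b = mu * (1 - np.exp(-r * theta))     # b(r*theta) for Ts ~ Exp(mu)
        approx = r**2 * theta - r**2 * theta**2 * mu * (1 + 1) / 2   # ce^2 = cs^2 = 1
        print(r, a + b, approx)               # agreement improves as r -> 0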

Page 27

Limit BAR

Recall the prelimit BAR (7):

[a(rθ) + b(rθ)]φ(r)(θ) − b(rθ)(1 − λ/µ)φ(r)0(θ) = 0.

Assume that as r → 0,

φ(r) → φ,  φ(r)0 → φ0 = 1.

Dividing (7) by r² and taking the limit as r → 0, one has the following limit BAR for φ and φ0:

[θ²(λce² + µcs²)/2 − θ]φ(θ) + θφ0(θ) = 0 for θ ≤ 0.   (9)

Page 28

Analysis from the limit BAR (9)

Thus, dividing (9) by θ < 0 (and using λ → µ),

[θµ(ce² + cs²)/2 − 1]φ(θ) + φ0(θ) = 0,

or

φ(θ) = φ0(θ)/[1 − θµ(ce² + cs²)/2] = 1/[1 − θµ(ce² + cs²)/2].

Taking θ ↑ 0, one has φ(0−) = φ0(0−) = 1, concluding that {rLr(0)} is tight.

Furthermore, as r → 0,

rLr(0) ⇒ exponential with mean µ(ce² + cs²)/2.
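
A closed-form sanity check, assuming the M/M/1 case: there Lr(∞) is geometric with parameter ρ = λ/µ and ce² = cs² = 1, so the limit should be exponential with mean µ:

    import numpy as np

    mu, theta = 1.0, -2.0
    for r in [0.1, 0.01, 0.001]:
        lam = mu - r
        rho = lam / mu                        # M/M/1: P(L = k) = (1 - rho) rho^k
        mean_rL = r * rho / (1 - rho)         # r E[L^r(0)] -> mu(ce^2 + cs^2)/2 = mu
        mgf_rL = (1 - rho) / (1 - rho * np.exp(r * theta))    # E[e^{theta r L^r}]
        print(r, mean_rL, mgf_rL, 1 / (1 - theta * mu))       # vs mgf of Exp(mean mu)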

Page 29

Mgf approach works for generalized Jackson networks

[Figure: the three-station generalized Jackson network of page 3, with the same routing matrix P.]

Braverman-Dai-Miyazawa (2017)

Page 30

A sketch for proving Theorem 1

[Figure: the 2-station, 3-class reentrant line of page 10.]

f(θ, X(0)) = exp( ∑_{i=1}^{3} θiLi(0) + a(θ1)Re(0) + b1(θ1 − θ2)Rs,1(0) + b2(θ2 − θ3)Rs,2(0) + b3(θ3)Rs,3(0) ).

E[e^(a(θ1)Te)] = e^(−θ1),  E[e^(b1(θ1−θ2)Ts,1)] = e^(θ1−θ2),
E[e^(b2(θ2−θ3)Ts,2)] = e^(θ2−θ3),  E[e^(b3(θ3)Ts,3)] = e^(θ3).

Page 31

Pre-limit BAR

Restricted BAR:

a(θ1)E[f(θ, X)] + b1(θ1 − θ2)E[f(θ, X)1{L1>0, L3=0}] + b2(θ2 − θ3)E[f(θ, X)1{L2>0}] + b3(θ3)E[f(θ, X)1{L3>0}] = 0,

which is equivalent to

[a(θ1) + (1 − β3)b1(θ1 − θ2) + b2(θ2 − θ3) + β3b3(θ3)]E[f(θ, X)]
+ [b1(θ1 − θ2) − b3(θ3)]( E[f(θ, X)1{L3=0}] − (1 − β3)E[f(θ, X)] )   (10)
− b1(θ1 − θ2)E[f(θ, X)1{L1=0, L3=0}] − b2(θ2 − θ3)E[f(θ, X)1{L2=0}] = 0.

Replacing θ by rθ, define

φr(θ) = E[f(rθ, X)],  φr1(θ) = E[f(rθ, X) | L1 = 0, L3 = 0],  φr2(θ) = E[f(rθ, X) | L2 = 0].

Page 32

State space collapse

Choose θ3 to aid the removal of the term (10). Conduct the expansions

b1(r(θ1 − θ2)) ≈ µ1( r(θ1 − θ2) − r²(θ1 − θ2)²cs,1²/2 ),
b3(rθ3) ≈ µ3( rθ3 − r²θ3²cs,3²/2 ).

Choose

θ1 ≤ 0,  θ2 ≤ 0,  θ3 = (µ1/µ3)(θ1 − θ2) ≤ 0,   (11)

so that the leading terms µ1r(θ1 − θ2) and µ3rθ3 cancel and b1(r(θ1 − θ2)) − b3(rθ3) = O(r²). There are “enough” points θ = (θ1, θ2, θ3) satisfying (11).

Page 33

Limit BAR

Assume that, for any θ1 ≤ θ2 ≤ 0 and θ3 = (µ1/µ3)(θ1 − θ2),

(φr(θ), φr1(θ), φr2(θ)) → (φ(θ1, θ2, θ3), φ1(θ2), φ2(θ1, θ3)).

The limit satisfies

γ(θ1, θ2, θ3)φ(θ1, θ2, θ3) + (θ2 − θ3)φ2(θ1, θ3) + µ3θ3φ1(θ2) = 0,

where

γ(θ1, θ2, θ3) = µ1(θ1 − θ2) + (θ2 − θ3) + quadratic terms in θ.

Page 34

Tightness

Assume θ1 ≤ θ2 ≤ 0 and θ3 = (µ1/µ3)(θ1 − θ2). Ignoring the quadratic term,

(θ2 − θ3)( φ2(θ1, θ3) − φ(θ1, θ2, θ3) ) + µ3θ3( φ1(θ2) − φ(θ1, θ2, θ3) ) = 0.

Setting θ2 = u ↑ 0, θ3 = −u², and

θ1 = u − (µ3/µ1)u² < 0,

one has

φ(0−, 0−, 0−) = φ2(0−, 0−).

Setting θ2 = 0 and using φ1(0) = 1, one has

φ(0−, 0, 0−) − φ2(0−, 0−) + µ3( 1 − φ(0−, 0, 0−) ) = 0,

which implies that φ(0−, 0−, 0−) = φ2(0−, 0−) = φ(0−, 0, 0−) = 1.

Page 35

BAR for a 5-class reentrant line

[Figure: the 2-station, 5-class reentrant line of page 16.]

a(θ1)E[f(θ; X(0))] + b1(θ1 − θ2)E[f(θ; X(0))1{L1>0, L3=0, L5=0}]
+ b2(θ2 − θ3)E[f(θ; X(0))1{L2>0, L4=0}]
+ b3(θ3 − θ4)E[f(θ; X(0))1{L3>0, L5=0}]
+ b4(θ4 − θ5)E[f(θ; X(0))1{L4>0}]
+ b5(θ5)E[f(θ; X(0))1{L5>0}] = 0.

Page 36

Choice of θ3, θ4 and θ5 to aid SSC

µ1(θ1 − θ2) = µ3(θ3 − θ4) = µ5θ5,  µ2(θ2 − θ3) = µ4(θ4 − θ5).   (12)

There are “not enough points θ < 0” satisfying (12). Truncation on L3, L4, L5 is needed to allow positive values of θ3, θ4, θ5. For example, even if θ3 > 0,

e^(rθ3Lr3(0)) ≤ e^(θ3) if Lr3(0) < 1/r.

It is sufficient to have

sup_{r∈(0,r0]} E[Lr3(0)] < ∞.

Page 37

Open problem: not using Theorem 2

In proving a version of Theorem 1 for general multiclass queueing networks under priority policies, we need the following strong SSC:

rEν[Lr3(∞)] → 0.

When all distributions are exponential, the mean drift satisfies

E[(Lr3(n + 1))² | Lr3(n) = x] − x² ≤ C1 − C2x,

where C1, C2 > 0 do not depend on r. Taking expectations under the stationary distribution gives 0 ≤ C1 − C2Eν[Lr3(∞)], and thus

Eν[Lr3(∞)] ≤ C1/C2.

The SSC

rLr3(∞) ⇒ 0

can then be proved using the BAR for this case. What about general distributions?
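
A small Gillespie-type simulation sketch of the exponential 2-station, 3-class reentrant line under preemptive LBFS, assuming the illustrative values m1 = 0.4, m2 = 1, m3 = 0.6; the time average of Lr3 stays bounded as r decreases, consistent with the drift bound and with rEν[Lr3(∞)] → 0:

    import numpy as np

    rng = np.random.default_rng(0)
    m1, m2, m3 = 0.4, 1.0, 0.6                  # assumed means: m1 + m3 = 1, m2 = 1
    mu = [1 / m1, 1 / m2, 1 / m3]

    def avg_L3(r, horizon=20_000.0):
        lam, L, t, area = 1.0 - r, [0, 0, 0], 0.0, 0.0
        while t < horizon:
            # Preemptive LBFS: station 1 serves class 3 if present, else class 1;
            # station 2 serves class 2. Exponential clocks make the state a CTMC.
            rates = [lam,
                     mu[0] if (L[0] > 0 and L[2] == 0) else 0.0,
                     mu[1] if L[1] > 0 else 0.0,
                     mu[2] if L[2] > 0 else 0.0]
            total = sum(rates)
            dt = rng.exponential(1.0 / total)
            area += L[2] * dt
            t += dt
            event = rng.choice(4, p=np.array(rates) / total)
            if event == 0:   L[0] += 1              # exogenous arrival to buffer 1
            elif event == 1: L[0] -= 1; L[1] += 1   # class 1 service completion
            elif event == 2: L[1] -= 1; L[2] += 1   # class 2 service completion
            else:            L[2] -= 1              # class 3 completion (departure)
        return area / t

    for r in [0.2, 0.1, 0.05]:
        print(r, avg_L3(r))                     # stays O(1), so r * E[L3] -> 0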
