Lecture 7: Smart Scheduling and Dispatching Policies
Thrasyvoulos Spyropoulos / [email protected]
Eurecom, Sophia-Antipolis


Single Server Model (M/G/1)


Poisson arrival process with rate λ

Load ρ = λE[X] < 1

Huge Variability

Job sizes with huge variance are everywhere in CS:
• CPU lifetimes of UNIX jobs [Harchol-Balter, Downey 96]
• Supercomputing job sizes [Schroeder, Harchol-Balter 00]
• Web file sizes [Crovella, Bestavros 98; Barford, Crovella 98]
• IP flow durations [Shaikh, Rexford, Shin 99]

Top-heavy: the top 1% of jobs make up half the load

X: job size (service requirement)

Squared coefficient of variation: C_X² = Var(X) / E[X]²
C_X² ≈ 50 is typical

[Figure: Pr{Job size > x} ≈ 1/x for k ≤ x ≤ p (Bounded Pareto); a decreasing-failure-rate (D.F.R.) distribution]
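To make these numbers concrete, here is a minimal Python sketch (the parameters α = 1.1, k = 1, p = 10⁶ are illustrative assumptions, not values taken from the slide) that samples Bounded Pareto job sizes and estimates C² and the load carried by the top 1% of jobs:

    import numpy as np

    def sample_bounded_pareto(n, alpha=1.1, k=1.0, p=1e6, seed=0):
        """Draw n samples from a Bounded Pareto(alpha, k, p) by inverse transform."""
        rng = np.random.default_rng(seed)
        u = rng.random(n)
        # Invert the CDF F(x) = (1 - (k/x)^alpha) / (1 - (k/p)^alpha) on [k, p]
        return k * (1.0 - u * (1.0 - (k / p) ** alpha)) ** (-1.0 / alpha)

    X = sample_bounded_pareto(1_000_000)
    C2 = X.var() / X.mean() ** 2              # squared coefficient of variation
    top1 = np.sort(X)[-len(X) // 100:]        # the 1% largest jobs
    print(f"C^2 ≈ {C2:.1f}")
    print(f"load carried by the top 1% of jobs: {top1.sum() / X.sum():.0%}")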


Outline

• Smart scheduling
  - Performance metrics
  - Policies classification
  - Examples
• Scheduling policies comparison (Fairness, Latency)
• Task assignment problem
  - Supercomputing and web server models
  - Optimal dispatching/scheduling policies

[Cartoon: "Ladies first!" + they'll go out first]


Smart scheduling: Motivation (I)


Why does scheduling matter?

[Cartoon: a user keeps asking "Why doesn't it work?!" and getting "Bla, bla, bla…" in response. Grrrrr!]

!! Delay is due to the other users who are currently sharing the service !!


Smart scheduling: Motivation (II)


The goal of smart scheduling is to reduce mean delay "for free", i.e., by simply serving jobs in the "right order", with no additional resources.

What is the right order in which to schedule jobs?

The answer strongly depends on:
• system load
• job size distribution


Smart scheduling: Performance metrics (I)

Common metrics to compare scheduling policies:

E[T], mean response time

E[N], mean number (of jobs) in system

E[TQ], mean waiting time (= E[T] - E[S], where E[S] is the mean service time)

Slowdown: SD = T/S (response time normalized by the running time)
- Meaning: if a job takes twice as long to complete due to system load, it suffers a slowdown factor of 2, etc.
- A job's response time should be proportional to its running time. Ideally:
  • small jobs → small response times
  • big jobs → big response times
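As a tiny illustration (the three job records below are hypothetical, not from the lecture), these metrics can be computed directly from per-job (arrival, size, departure) data:

    # Hypothetical records: (arrival time, job size S, departure time)
    jobs = [(0.0, 2.0, 2.0), (1.0, 1.0, 3.0), (1.5, 4.0, 7.5)]

    T = [dep - arr for arr, size, dep in jobs]        # response times
    S = [size for _, size, _ in jobs]
    SD = [t / s for t, s in zip(T, S)]                # per-job slowdown T/S

    E_T = sum(T) / len(T)                             # mean response time
    E_TQ = E_T - sum(S) / len(S)                      # mean waiting time = E[T] - E[S]
    E_SD = sum(SD) / len(SD)                          # mean slowdown
    lam = len(jobs) / max(dep for *_, dep in jobs)    # rough arrival rate over the window
    E_N = lam * E_T                                   # Little's law: E[N] = lambda * E[T]
    print(E_T, E_TQ, E_SD, E_N)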


Smart scheduling: Performance metrics (II)

Starvation/fairness metrics

A low average Slowdown doesn’t necessarily mean fairness (starvation of large jobs)

Good metric: E[SD(x)] is the expected slowdown of a job of size x, i.e., mean slowdown as a function of x

Is E[SD] = E[T]/E[S]? No! First we need to derive:

E[T(x)] = E[T | job has size x],   SD(x) = T(x)/x,   E[SD(x)] = E[T(x)] / x

Then, we get the mean SD by deconditioning on the job size:

E[SD] = ∫ E[SD | job has size x]·f_S(x) dx = ∫ E[SD(x)]·f_S(x) dx = ∫ (1/x)·E[T(x)]·f_S(x) dx


Scheduling policies: classification

Definitions:
• Preemptive policy: a job may be stopped and later resumed from the same point where it was stopped
• Size-based policy: uses knowledge of the job size

Classification

• Non-Preemptive, Non-Size-Based Policies
• Preemptive, Non-Size-Based Policies
• Non-Preemptive, Size-Based Policies
• Preemptive, Size-Based Policies

Focus on the M/G/1 queue (Poisson arrivals with rate λ, general job size distribution)


Non-Preemptive, Non-Size-Based Policies (I)

Non-preemptive policies (each job is run to completion) that don't assume knowledge of job sizes:

• FCFS (First-Come-First-Served) or FIFO: jobs are served in the order they arrive. Each job is run to completion before the next job receives service (e.g., call centers, supercomputing centers)

• LCFS (Last-Come-First-Served, non-preemptive): when the server frees up, it always chooses the most recently arrived job and runs that job to completion (jobs piled onto a stack)

• RANDOM: when the server frees up, it chooses a random job to run next (mostly of theoretical interest)


Non-Preemptive, Non-Size-Based Policies (II)

Interesting property: all non-preemptive service orders that do not make use of job sizes have the same distribution of the number of jobs in the system (the time until completion is equal in distribution for all these policies). Hence they have the same E[T] and E[N]. What about E[SD]?

For all these policies (in an M/G/1):

E[TQ] = λE[S²] / (2(1 - ρ))   ← proportional to job size variability!
E[T] = E[S] + E[TQ]

A job's waiting time is independent of its size, so

E[SD] = E[T/S] = E[TQ/S] + 1 = E[TQ]·E[1/S] + 1

Since E[S] and E[TQ] are the same for each policy → they all have the same E[SD].
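A minimal sketch of these formulas; as an example input it reuses the two-point job size distribution from Exercise 1 later in this deck (S = 1 w.p. 3/4, S = 2 w.p. 1/4) with λ = 1/2:

    def mg1_nonpreemptive(lam, ES, ES2, E_inv_S):
        """M/G/1 under any non-preemptive, non-size-based order (FCFS, LCFS, RANDOM)."""
        rho = lam * ES
        assert rho < 1, "queue must be stable"
        E_TQ = lam * ES2 / (2 * (1 - rho))   # Pollaczek-Khinchine mean waiting time
        E_T = ES + E_TQ                      # mean response time
        E_SD = 1 + E_TQ * E_inv_S            # valid because TQ is independent of the job size
        return E_TQ, E_T, E_SD

    # Two-point size distribution from Exercise 1: S = 1 w.p. 3/4, S = 2 w.p. 1/4
    ES = 3/4 * 1 + 1/4 * 2          # 5/4
    ES2 = 3/4 * 1 + 1/4 * 4         # 7/4
    E_inv_S = 3/4 * 1 + 1/4 * 0.5   # 7/8
    print(mg1_nonpreemptive(lam=1/2, ES=ES, ES2=ES2, E_inv_S=E_inv_S))
    # -> E[TQ] = 7/6, E[T] = 29/12, E[SD] = 97/48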


Preemptive, Non-Size-Based Policies (I)

So far: non-preemptive, non-size-based service. E[T] can be very high when job size variability is high. Intuition: short jobs queue up behind long jobs.

Processor-Sharing (PS): when a job arrives, it immediately shares the server capacity with all the current jobs (e.g., round-robin CPU scheduling)

+ PS allows short jobs to get out quickly, helps reduce E[T] and E[SD] (compared to FCFS), and increases system throughput (different jobs run simultaneously)

- PS is not better than FCFS on every arrival sequence

+ Mean response time for PS is insensitive to job size variability: E[T]M/G/1/PS = E[S] / (1 - ρ), where ρ is the system utilization (load)


Preemptive, Non-Size-Based Policies (II)

Performance of the M/G/1/PS system

Response time (think of Little's law!):

E[T(x)] = x / (1 - ρ)

Mean slowdown:

E[SD(x)] = E[T(x)] / x = 1 / (1 - ρ) = E[SD]

!!! Constant slowdown (independent of the size x) !!!
In non-preemptive, non-size-based scheduling, E[SD] for small jobs was greater than for large jobs. Under PS, all jobs have the same slowdown → FAIR scheduling
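A small sketch of the PS formulas, just to show that the slowdown E[T(x)]/x is the same constant for every job size x (ρ = 0.8 is an illustrative value):

    def ps_response_time(x, rho):
        """M/G/1/PS: expected response time of a job of size x."""
        return x / (1 - rho)

    rho = 0.8
    for x in (0.1, 1.0, 10.0):
        Tx = ps_response_time(x, rho)
        print(f"x={x:>5}: E[T(x)]={Tx:>6.2f}, slowdown={Tx / x:.2f}")
    # The slowdown column is constant (1/(1-rho) = 5 here), regardless of x.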


Preemptive, Non-Size-Based Policies (III)

Preemptive-LCFS: a new arrival preempts the job in service; when that arrival completes, the preempted job is resumed
• E[T(x)], E[SD(x)] are as in the PS case
• + compared to PS: fewer preemptions (only 2 per job)

Can we decrease E[SD(x)]? → lower SD for smaller jobs

!!! We don't know the size of the jobs!

FB (Foreground-Background), also called LAS (Least Attained Service)
Idea: to reduce E[SD], use knowledge of a job's age (an indicator of its remaining CPU demand) and serve the job with the lowest age


Foreground-Background scheduling (cont'd)

Used to control the execution of multiple processes on a single processor: two queues (F and B) and one server

Idea of FB: the job with the lowest CPU age gets the CPU to itself. If several jobs have the same lowest CPU age, they share the CPU using PS

Performance depends on how good an indicator the age is of the remaining size (this depends on the job size distribution)!!

• Jobs enter queue F (PS service)
• When a job hits a certain age a, it is moved to queue B
• Jobs in B get service only when queue F is empty
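A minimal time-stepped sketch of the LAS rule itself (serve the job with the least attained service, PS among ties), not of the two-queue F/B implementation; the three jobs below are hypothetical:

    def las_schedule(jobs, dt=0.01):
        """jobs: name -> (arrival time, size). Returns approximate completion times
        under LAS: at every step, the present job(s) with the least attained service
        share the server (PS among ties)."""
        attained = {j: 0.0 for j in jobs}
        remaining = {j: size for j, (_, size) in jobs.items()}
        done, t = {}, 0.0
        while len(done) < len(jobs):
            present = [j for j in jobs if jobs[j][0] <= t and j not in done]
            if present:
                youngest = min(attained[j] for j in present)
                active = [j for j in present if attained[j] <= youngest + 1e-12]
                for j in active:                          # PS among the youngest jobs
                    remaining[j] -= dt / len(active)
                    attained[j] += dt / len(active)
                    if remaining[j] <= 1e-9:
                        done[j] = round(t + dt, 6)
            t += dt
        return done

    # A late short job overtakes the long job that has already attained more service.
    print(las_schedule({"long": (0.0, 5.0), "short1": (0.0, 1.0), "short2": (2.0, 1.0)}))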


Non-Preemptive, Size-Based Policies (I)

Size-based policies are a special case of priority queueing, often used in computer systems, e.g., databases (differentiated levels of service), scheduling of HTTP requests, high/low-priority transactions

Size-based scheduling can improve the performance of a system tremendously!

Priority queueing (non-preemptive): consider an M/G/1 priority queue with n classes, where class 1 has the highest priority and class n the lowest. The class-k job arrival rate is λ_k = λ·p_k. The time in queue for jobs of priority k is:

E[TQ(k)] = (λE[S²] / 2) / ((1 - Σ_{i=1..k-1} ρ_i) · (1 - Σ_{i=1..k} ρ_i)),  where ρ_i = λ_i·E[S_i]

The three contributions are: waiting for the job currently in service (the numerator), waiting for the jobs already in the queue of priority ≥ that of class k, and waiting for the jobs of higher priority that arrive after the tagged class-k job (the two denominator factors).

E[TQ(k)]NP-Priority < E[TQ]FCFS

+ A class-k job only sees the load due to jobs of class ≤ k
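A small sketch of this formula for n classes (class 0 is the highest priority here); the two-class numbers at the bottom are illustrative, assuming exponential sizes so that E[S²] = 2·E[S]²:

    def np_priority_waiting(lam, p, ES, ES2):
        """Non-preemptive M/G/1 priority queue.
        lam: total arrival rate; p[k]: fraction of class-k jobs (class 0 = highest priority);
        ES[k], ES2[k]: first/second moments of class-k sizes. Returns E[TQ(k)] per class."""
        lams = [lam * pk for pk in p]
        rhos = [l * m for l, m in zip(lams, ES)]
        assert sum(rhos) < 1, "system must be stable"
        W0 = sum(l * m2 for l, m2 in zip(lams, ES2)) / 2   # residual work of the job in service
        return [W0 / ((1 - sum(rhos[:k])) * (1 - sum(rhos[:k + 1])))
                for k in range(len(p))]

    # Two classes: short jobs (mean 1) have priority over long jobs (mean 4), exponential sizes.
    print(np_priority_waiting(lam=0.3, p=[0.8, 0.2], ES=[1, 4], ES2=[2, 32]))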


Non-Preemptive, Size-Based Policies (II)

Question: If you want to minimize E[T], who should have higher priority: large or small jobs?

Theorem: consider an NP-Priority M/G/1 with two classes of jobs, small (S) and large (L). To minimize E[T], class-S jobs should have priority over class-L jobs (since E[S_S] < E[S_L]).

SJF - Non-preemptive Shortest Job First
Whenever the server is free, it chooses the job with the smallest size (once a job is running, it is never interrupted)
- Under heavy-tailed distributions, E[TQ] is smaller than under FCFS (since most jobs are small)
- But mean delay is still proportional to the variance → large delays for very high variance
- Small jobs can still get stuck behind a big one (already running) → need for preemption!


Preemptive, Size-Based Policies

So far: non-preemptive policies → higher delay under highly variable job size distributions

Preemptive priority queueing
- Better than non-preemptive priority: the delay of class k depends only on the variability of the first k priority classes!

PSJF - Preemptive Shortest Job First
- Like SJF, the job in service is the one with the smallest original size
- A preemption occurs if a smaller job arrives
- Mean response time is far lower than under SJF (PSJF is far less sensitive to variability in the job size distribution)


SRPT

SRPT - Shortest Remaining Processing Time
Whenever the server is free, the job chosen is the one with the shortest remaining processing time

Preemptive policy: a new arrival may preempt the current job in service if it has a shorter remaining processing time

Compared to PSJF:
- SRPT takes into account the remaining service requirement, not just the original job size
- Overall mean response time is lower

Compared to FB:
- In SRPT, a job gains priority as it receives more service
- In FB, a job has the highest priority when it first enters
- In an M/G/1 → E[T(x)]SRPT ≤ E[T(x)]FB
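A minimal single-server SRPT sketch in the same time-stepped style as the LAS sketch above (hypothetical jobs; at every step the present job with the smallest remaining size runs):

    def srpt_schedule(jobs, dt=0.01):
        """jobs: name -> (arrival time, size). Approximate completion times under SRPT."""
        remaining = {j: size for j, (_, size) in jobs.items()}
        done, t = {}, 0.0
        while len(done) < len(jobs):
            present = [j for j in jobs if jobs[j][0] <= t and j not in done]
            if present:
                j = min(present, key=remaining.get)   # smallest remaining processing time
                remaining[j] -= dt
                if remaining[j] <= 1e-9:
                    done[j] = round(t + dt, 6)
            t += dt
        return done

    # The short job arriving at t=2 preempts the long one and leaves at ~3; the long job ends at ~6.
    print(srpt_schedule({"long": (0.0, 5.0), "short": (2.0, 1.0)}))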


Policies comparison: mean response time (I)

[Figure: E[T] vs. load ρ for FCFS, SJF, PS, LAS and SRPT, at several values of C²; M/G/1 queue, job size distribution is Bounded Pareto]
Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/

Versus C²:
• SJF/FCFS delay increases with C²
• LAS delay decreases with C² (DFR needs higher C²)
• PS and SRPT are invariant to C²

Versus ρ:
• SJF/FCFS delay is very high even for low ρ
• SRPT/LAS delay increases slightly with ρ
• SRPT has the lowest delay

(C² is the squared coefficient of variation)


Policies comparison: mean response time (II)

[Figure: E[T] vs. C² for the different policies; Weibull job size distribution, ρ = 0.7. PS and SRPT are invariant to job size variability; FCFS/SJF delay increases quickly with C²; LAS requires a higher C² to perform well]
Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/


Exercise

• M/G/1 queue
• Job size distribution: Bounded Pareto
• The load is ρ = 0.9
• The (very) biggest job in the job size distribution has size x = 10^10

Question: is E[T(x)] lower under SRPT scheduling or under PS scheduling?

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/


Exercise: Solution

First intuition:
• Small jobs should favor SRPT
• Large jobs have the lowest priority under SRPT, but they get treated equally under PS (equal time-sharing)
• Thus, it seems much better for "Mr. Max" (the very biggest job) to go to the PS queue: E[T(x)]PS should be far lower than E[T(x)]SRPT

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/


Exercise: Solution (cont’d)

Actually, even the largest job prefers SRPT to PS (though by less than a factor of 5)
• Almost all jobs (99.9999%) prefer SRPT to PS by more than a factor of 2
• 99% of jobs prefer SRPT to PS by more than a factor of 5

But how can this be? Can every job really do better in expectation under SRPT than under PS?
All-Can-Win Theorem! (for the Bounded Pareto distribution it holds for ρ < 0.96)

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/


SRPT: Fairness

SRPT is optimal wrt mean response time. In practice, it is not often used for scheduling jobs:
• Job size is not always known
• PS is preferred in web servers, unless serving static requests (where the size is known)

What about fairness?
A policy is fair if each job has the same expected SD, regardless of its size

SRPT vs PS? Is SRPT worse for large jobs?
All-Can-Win Theorem: in an M/G/1, if ρ < 0.5 → E[T(x)]SRPT ≤ E[T(x)]PS (for all distributions, for all x)
- Intuition: once a large job starts to receive service, it gains priority; under light load, even a job of large size x can do worse under PS than under SRPT (because of its higher residence time under PS)


Summary on scheduling single server (M/G/1): E[T]

Let’s order the policies based on E[T]:

Poisson arrival process, load ρ < 1

LOW E[T]  ←  SRPT < LAS < PS < SJF < FCFS  →  HIGH E[T]

• SRPT: optimal for all arrival sequences; no "starvation", even the biggest jobs prefer SRPT to PS
• LAS: requires D.F.R. (Decreasing Failure Rate)
• PS: insensitive to E[S²]
• SJF: surprisingly bad (E[S²] term)
• FCFS: ~E[S²] (shorts caught behind longs)

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/

Smart scheduling greatly improves mean response time (e.g., SRPT)

Variability of job size distribution is key


Multiserver Model

[Diagram: incoming jobs (Poisson process) → routing (assignment) policy at a router → multiple hosts, each running its own scheduling policy]

Server farms: + cheap, + scalable capacity

2 policy decisions: the routing (assignment) policy and the per-host scheduling policy (sometimes the scheduling policy is fixed, e.g., in a legacy system)


Outline

I. Review of scheduling in a single server
II. Supercomputing (Router → FCFS hosts)
III. Web server farm model (Router → PS hosts)
IV. Towards optimality… (Router → SRPT hosts)

Metric: mean response time, E[T]


Supercomputing Model

[Diagram: Poisson arrivals → routing (assignment) policy → FCFS hosts]

• Jobs are not preemptible
• Assume hosts are identical; jobs are i.i.d. ~ G, a highly variable size distribution
• Jobs are processed in FCFS order
• Size may or may not be known; initially assume it is known


Q: Compare Routing Policies for E[T]?

[Diagram: Poisson arrivals, jobs i.i.d. ~ G (highly variable) → routing policy → FCFS hosts (supercomputing)]

1. Round-Robin
2. Join-Shortest-Queue: go to the host with the fewest # of jobs
3. Least-Work-Left: go to the host with the least total work
4. Central-Queue-Shortest-Job (M/G/k/SJF): a host grabs the shortest job when it becomes free
5. Size-Interval Splitting: jobs are split up by size among the hosts


Supercomputing model (II)

Assumption for this slide: the job size is known!

1. Round-Robin: jobs are assigned to hosts (servers) in a cyclical fashion
2. Join-Shortest-Queue: go to the host with the fewest # of jobs
3. Least-Work-Left (equalize the total work): go to the host with the least total work (sum of the sizes of the jobs there)
4. Central-Queue-Shortest-Job (M/G/k/SJF): a host grabs the shortest job when it becomes free
5. Size-Interval Splitting: jobs are split up by size among hosts. Each host is assigned a size interval (e.g., short/medium jobs go to the first host, long jobs go to the second host)

(The policies are listed roughly from high E[T] at the top to low E[T] at the bottom.)
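To make the rules concrete, here is a sketch of four of them as dispatch functions over a toy representation of host state (each host is just the list of job sizes currently queued there); the size-interval cutoffs are illustrative, and Central-Queue-Shortest-Job is omitted because it is not an immediate-dispatch rule:

    import itertools

    hosts = [[], [], []]                 # host h = list of job sizes currently queued at h
    _rr = itertools.cycle(range(len(hosts)))

    def round_robin(size):
        return next(_rr)                                                # cyclic assignment

    def join_shortest_queue(size):
        return min(range(len(hosts)), key=lambda h: len(hosts[h]))      # fewest # of jobs

    def least_work_left(size):
        return min(range(len(hosts)), key=lambda h: sum(hosts[h]))      # least total work

    def size_interval(size, cutoffs=(1.0, 10.0)):
        """Short jobs to host 0, medium to host 1, long to host 2 (illustrative cutoffs)."""
        for h, cut in enumerate(cutoffs):
            if size <= cut:
                return h
        return len(cutoffs)

    for s in (0.5, 20.0, 3.0, 0.2):
        hosts[size_interval(s)].append(s)
    print(hosts)    # [[0.5, 0.2], [3.0], [20.0]]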


What if job size is not known?

The TAGS algorithm: "Task Assignment by Guessing Size"

Answer: when a job reaches the size limit for its host, it is killed and restarted from scratch at the next host.

[Diagram: outside arrivals enter Host 1; jobs exceeding each host's size cutoff are killed and restarted at Host 2, then Host 3]

[Harchol-Balter, JACM 02]


Results of Analysis

[Figure: mean response time vs. job size variability for Bounded Pareto jobs, from lower to higher variability, comparing Random, Least-Work-Left, and TAGS]


Supercomputing model (III)

Summary:
• This model is stuck with FCFS at the servers. It is important to find a routing/dispatching policy that keeps small jobs from getting stuck behind big ones → Size-Interval Splitting
• By isolating small jobs, we can achieve the effect of smart single-server policies
• Greedy routing policies (JSQ, LWL) are poor (they don't provide isolation for smalls and are not good under highly variable workloads)
• We don't need to know the size (TAGS = Task Assignment by Guessing Size)


Web server farm model (I)

Examples: Cisco Local Director, IBM Network Dispatcher, Microsoft SharePoint, etc.

[Diagram: Poisson arrivals → routing policy at a router → PS hosts]

• HTTP requests are immediately dispatched to a server
• Requests are fully preemptible
• Processor-Sharing (each HTTP request receives a "constant" share of service)
• Jobs are i.i.d. with distribution G (heavy-tailed job size distribution for Web sites)


Web server farm model (II)

1. Random
2. Join-Shortest-Queue: go to the host with the fewest # of jobs
3. Least-Work-Left: go to the host with the least total work
4. Size-Interval Splitting: jobs are split up by size among hosts

PS farm with probabilistic routing (a job goes to host i with probability p_i): E[T] = Σ_i p_i · E[S_i] / (1 - ρ_i)
Single M/G/1/PS: E[T] = E[S] / (1 - ρ)
→ With Random routing each host behaves like an M/G/1/PS at load ρ, so E[T] = E[S]/(1 - ρ): insensitive to job size variability, but not great
→ Join-Shortest-Queue is better, also under high-variance distributions

[Figure: E[T] for JSQ, LWL, RAND and SIZE dispatching; 8 servers, ρ = 0.9, C² = 50]

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/
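A small sketch evaluating the probabilistic-routing expression above (the rates and mean sizes are illustrative):

    def ps_farm_mean_T(lam, p, ES):
        """Probabilistic routing to PS hosts: host i is an M/G/1/PS queue with
        arrival rate lam*p[i], so E[T_i] = E[S_i]/(1 - rho_i)."""
        ET = 0.0
        for pi, ESi in zip(p, ES):
            rho_i = lam * pi * ESi
            assert rho_i < 1, "every host must be stable"
            ET += pi * ESi / (1 - rho_i)
        return ET

    # Two identical hosts, even (Random) split: each host has rho_i = 0.5,
    # so E[T] = E[S]/(1 - rho_i) = 2, insensitive to the job size variability.
    print(ps_farm_mean_T(lam=1.0, p=[0.5, 0.5], ES=[1.0, 1.0]))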


Optimal dispatching/scheduling scheme (I)

What is the optimal dispatching + scheduling pair?

Central-queue-SRPT looks very good

Is Central-queue-SRPT always optimal for a server farm?
No!! It does not minimize E[T] on every arrival sequence!

Practical issue: jobs must be immediately dispatched (they cannot be held in a central queue)!!

Assumptions:
• Jobs are fully preemptible within a queue
• Job sizes are known


Optimal dispatching/scheduling scheme (II)

[Diagram: incoming jobs are immediately dispatched by the router to SRPT hosts]

Claim: the optimal dispatching/scheduling pair, given immediate dispatch, uses SRPT at the hosts

Intuition: SRPT is very effective at getting short jobs out → it reduces E[N], and thus the mean response time E[T] (Little's law) → narrow the search to policies with SRPT at the hosts!


Optimal dispatching/scheduling scheme (III)

The optimal immediate-dispatching policy is not obvious!

RANDOM task assignment performs well: each queue looks like an M/G/1/SRPT queue with arrival rate λ/k

Idea: spread short jobs out over the SRPT servers → the IMD algorithm (Immediate Dispatching), sketched below
• Divide jobs into size classes (e.g., small, medium, large) and assign each job to the server with the fewest # of jobs of that size class
• Each server should have some small, some medium and some large jobs (so that SRPT can be maximally effective)

IMD's performance is as good as Central-Queue-SRPT

Almost no stochastic analysis (only worst-case analysis is available)!
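A minimal sketch of the IMD dispatching rule; the number of hosts, the size-class cutoffs, and the per-host counters are hypothetical bookkeeping choices, not the exact data structures of the original algorithm:

    NUM_HOSTS = 3
    CUTOFFS = (1.0, 10.0)                      # size classes: small, medium, large
    counts = [[0] * (len(CUTOFFS) + 1) for _ in range(NUM_HOSTS)]   # counts[h][c]

    def size_class(x):
        for c, cut in enumerate(CUTOFFS):
            if x <= cut:
                return c
        return len(CUTOFFS)

    def imd_dispatch(x):
        """Send the job to the host with the fewest jobs of its own size class,
        so that every SRPT host keeps a mix of small, medium and large jobs."""
        c = size_class(x)
        h = min(range(NUM_HOSTS), key=lambda hh: counts[hh][c])
        counts[h][c] += 1          # decrement again when the job completes (not shown)
        return h

    for x in (0.3, 50.0, 0.7, 5.0, 0.1):
        print(f"job of size {x} -> host {imd_dispatch(x)}")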



Summary

Supercomputing (Router → FCFS hosts):
• Need size-interval splitting to combat job size variability and enable good performance

Web server farm model (Router → PS hosts):
• Job size variability is not an issue
• LWL and JSQ perform well

Optimal dispatching/scheduling pair (Router + SRPT hosts):
• Both (Central-Queue-SRPT and IMD) have similar worst-case E[T]
• Almost exclusively worst-case analysis, so hard to compare with the above results
• Stochastic analysis is still needed

Source: Prof. Mor Harchol-Balter, http://www.cs.cmu.edu/~harchol/


Exercises

Ex. 1 – Slowdown

Jobs arrive at a server which serves them in FCFS order. The average arrival rate is λ = 1/2 job/sec. The job sizes (service times) are independently and identically distributed according to a random variable S, where S = 1 with prob. 3/4 and S = 2 otherwise.

Suppose E[T] = 29/12. Compute the mean slowdown E[SD], where the slowdown of job j is defined as Slow(j) = T(j)/S(j), with T(j) the response time of job j and S(j) the size of job j.

Solution: Recall the definition of response time for a FCFS queue: T = TQ + S, where TQ is the waiting (queueing) time. Thus,

E[SD] = E[T/S] = E[(TQ + S)/S] = E[TQ/S] + 1



Since the server is FCFS, a particular job’s waiting time is independent of its service time. This fact allows us to break up the expectation, giving us:

The distribution of S is given, so we calculate E[S] and E[1/S] using the definition of expectation: E[S] = 5/4 and E[1/S] = 7/8.

Then, we get E[SD] = 1 + (29/12 - 5/4)·(7/8) = 97/48.

If the service order had been SJF, would the same technique have worked for computing the mean slowdown? In the SJF case, S and TQ are not independent, so we can't split the expectation as we did above. They are not independent because the job size affects the queueing order: short jobs get to jump to the front of the queue under SJF, and hence their TQ is shorter.

E[SD] = E[TQ]·E[1/S] + 1 = (E[T] - E[S])·E[1/S] + 1
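A quick exact check of the numbers above:

    from fractions import Fraction as F

    E_T = F(29, 12)
    E_S = F(3, 4) * 1 + F(1, 4) * 2              # 5/4
    E_inv_S = F(3, 4) * 1 + F(1, 4) * F(1, 2)    # 7/8
    E_TQ = E_T - E_S                             # 7/6
    E_SD = 1 + E_TQ * E_inv_S                    # waiting time independent of size under FCFS
    print(E_SD)                                  # 97/48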


Ex. 2 – FCFS/SJF/RR CPU scheduling

Compute the average waiting time for processes with the following next CPU burst times (ms) and ready queue order:
1. P1: 20
2. P2: 12
3. P3: 8
4. P4: 16
5. P5: 4


Solution FCFS:

Gantt chart: | P1 | P2 | P3 | P4 | P5 |  (boundaries at 0, 20, 32, 40, 56, 60)

Waiting time:
• T1 = 0
• T2 = 20
• T3 = 32
• T4 = 40
• T5 = 56

Average waiting time: 148/5 = 29.6 ms

+ Very simple algorithm
- Long waiting time!


Solution SJF:

Gantt chart: | P5 | P3 | P2 | P4 | P1 |  (boundaries at 0, 4, 12, 24, 40, 60)

Waiting time:
• T1 = 40
• T2 = 12
• T3 = 4
• T4 = 24
• T5 = 0

Average waiting time: 16 ms

+ Shorter average waiting time
- Requires future knowledge of the burst lengths


Solution RR scheduling: give each process a unit of time (time slice, quantum) of execution on the CPU, then move to the next process in the queue; continue until all processes have completed. Assumption: time quantum of 4 ms.

Gantt chart: | P1 | P2 | P3 | P4 | P5 | P1 | P2 | P3 | P4 | P1 | P2 | P4 | P1 | P4 | P1 |
(boundaries at 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60)
P5 completes at 20, P3 at 32, P2 at 44, P4 at 56, P1 at 60


Waiting time (completion time minus burst time):
• T1: 60 - 20 = 40
• T2: 44 - 12 = 32
• T3: 32 - 8 = 24
• T4: 56 - 16 = 40
• T5: 20 - 4 = 16

Average waiting time: 152/5 = 30.4 ms

- Average waiting time is high
+ Good average response time (important for interactive/time-sharing systems)
• Using a smaller quantum is possible (but the overhead increases)

Same exercise with other scheduling disciplines!
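A small sketch that reproduces the three averages computed above (FCFS and SJF from the service order, RR by simulating the 4 ms quantum); it mirrors the hand calculation rather than being a general-purpose scheduler:

    from collections import deque

    bursts = {"P1": 20, "P2": 12, "P3": 8, "P4": 16, "P5": 4}   # all ready at time 0

    def avg_wait(order):
        t, total = 0, 0
        for p in order:
            total += t              # waiting time of p = time before it starts
            t += bursts[p]
        return total / len(order)

    def avg_wait_rr(quantum=4):
        rem, t, finish, q = dict(bursts), 0, {}, deque(bursts)
        while q:
            p = q.popleft()
            run = min(quantum, rem[p])
            t, rem[p] = t + run, rem[p] - run
            if rem[p]:
                q.append(p)
            else:
                finish[p] = t
        return sum(finish[p] - bursts[p] for p in bursts) / len(bursts)

    print(avg_wait(["P1", "P2", "P3", "P4", "P5"]))        # FCFS : 29.6
    print(avg_wait(sorted(bursts, key=bursts.get)))        # SJF  : 16.0
    print(avg_wait_rr())                                   # RR(4): 30.4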



Ex. 3 – LCFS

Derive the mean queueing time E[TQ]LCFS. Derive this by conditioning on whether an arrival finds the system busy or idle.

Solution: With probability 1 - ρ, the arrival finds the system idle; in that case E[TQ] = 0. With probability ρ, the arrival finds the system busy and has to wait for the whole busy period started by the job in service.

The job in service has remaining size Se. Thus the arrival has to wait for the expected duration of a busy period started by Se, which is E[B(Se)] = E[Se]/(1 - ρ).


You can derive this fact by first deriving the mean length of a busy period started by a job of size x, namely E[B(x)] = x/(1 - ρ), and then deriving E[B(Se)] by conditioning on the probability that Se equals x.

Putting these two pieces together, we have

E[TQ]LCFS = (1 - ρ)·0 + ρ·E[Se]/(1 - ρ) = ρ·E[Se]/(1 - ρ)

As expected, this is exactly the mean queueing time under FCFS.


Ex. 4 – Server Farm

Suppose you have a distributed server system consisting of two hosts. Each host is a time-sharing (PS) host. Host 1 is twice as fast as Host 2.

Jobs arrive to the system according to a Poisson process with rate λ = 1/9.

The job service requirements come from some general distribution D and have mean 3 seconds if run on Host 1 (hence mean 6 seconds on Host 2).

When a job enters the system, with probability p = 3/4 it is sent to Host 1, and with probability 1 - p = 1/4 it is sent to Host 2.

Question: What is the mean response time for jobs?


[Diagram: Poisson(1/9) arrivals; with probability 3/4 a job goes to Host 1 (PS, mean size 3 sec), with probability 1/4 to Host 2 (PS, mean size 6 sec)]

Solution: The mean response time is simply

E[T] = (3/4)·(mean response time at Host 1) + (1/4)·(mean response time at Host 2)

But each host is just an M/G/1/PS server, which has the same mean response time as an M/M/1/FCFS server, namely E[T_i] = 1/(μ_i - λ_i). Thus,

E[T_1] = 1 / (1/3 - (3/4)·(1/9)) = 1 / (1/4) = 4 sec
E[T_2] = 1 / (1/6 - (1/4)·(1/9)) = 1 / (5/36) = 36/5 sec
E[T] = (3/4)·4 + (1/4)·(36/5) = 24/5 = 4.8 sec
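And a quick exact check of the arithmetic:

    from fractions import Fraction as F

    lam, p = F(1, 9), F(3, 4)
    mu1, mu2 = F(1, 3), F(1, 6)              # Host 2 is twice as slow (mean job size 6 sec)

    E_T1 = 1 / (mu1 - p * lam)               # M/G/1/PS has the M/M/1 mean response time
    E_T2 = 1 / (mu2 - (1 - p) * lam)
    E_T = p * E_T1 + (1 - p) * E_T2
    print(E_T1, E_T2, E_T)                   # 4, 36/5, 24/5 seconds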