SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

Transcript of SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

Page 1: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

Matt Welsh, David Culler, and Eric Brewer

Computer Science Division, University of California, Berkeley

Page 2: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


An Application for:

servicing requests for Internet services

servicing a load (demand) that is never constant

providing a service that scales to load

keep in mind these issues of scalability: load fluctuates, sometimes wildly

you only want to allot the resources that servicing the load requires

every system has a limit! Scale responsibly! don’t overcommit the system


Page 3: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Design Goals:

service requests for BOTH static content AND dynamic content

demand for dynamic content is increasing

work to deliver static content is predictable

retrieve the static content from disk or cache

send it back on the network


Page 4: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Design Goals:

work to deliver dynamic content is unknown

content is built on the fly

how many I/Os will it take to retrieve the content?

laying out, inserting, and formatting content may require substantial computation

posing queries to a database and incorporating the results

all the more reason to scale to load!


Page 5: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Design Goals:

adaptive service logic

different levels of load require different service strategies for optimum response

load is determined by BOTH the number of requests for service and the work the server must do to answer them

platform independence by adapting to the resources available

load is load, no matter what your system is


Page 6: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Consider (just) Using Threads:

the ‘model’ of concurrency

two ways to just use threads for servicing internet requests:

unbounded thread allocation

bounded thread pool (Apache)

hint: remember !! (somewhat different reasons though!)


Threads!

Page 7: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Consider (just) Using Threads:

unbounded thread allocation

issues a thread per request

too many threads use up all the memory

the scheduler ‘thrashes’ between them; the CPU spends all its time on context switches

bounded thread pool

works great until every thread is busy

requests that come in afterward suffer unpredictable waits --> ‘unfairness’
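
To make the bounded-pool case concrete, here is a minimal Java sketch (mine, not code from the paper or from Apache): a fixed pool of 64 workers handles connections, and once every worker is busy, new requests simply sit in the executor's queue for an unbounded, unpredictable time.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BoundedPoolServer {
        public static void main(String[] args) throws IOException {
            // hard cap on concurrency, like Apache's worker limit
            ExecutorService pool = Executors.newFixedThreadPool(64);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();
                    // once all 64 workers are busy, this task waits in the
                    // executor's unbounded queue -- the 'unpredictable wait' above
                    pool.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (client) {
                client.getOutputStream().write("HTTP/1.0 200 OK\r\n\r\nok".getBytes());
            } catch (IOException e) {
                // dropped connection; ignore for brevity
            }
        }
    }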


Page 8: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Consider (just) Using Threads:

transparent resource virtualization (!)

fancy term for the virtual environment threads/processes run in

threads believe they have everything to themselves, unaware that they must share resources

t. r. v. = delusions of grandeur!

a thread gets no participation in system resource management decisions and no indication of resource availability with which to adapt its own service logic


Page 9: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


DON’T Consider Using Threads!

threaded server throughput degradation

as the number of threads spawned by the system rises, the ability of the system to do work declines

throughput goes down, latency goes up

Page 10: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Consistent Throughput is KEY:

a well-conditioned service sustains throughput by acting like a pipeline

break the servicing of requests down into stages

each stage knows its limits and does NOT overprovision

so maximum throughput stays constant no matter how much the load varies

if load exceeds the capacity of the pipeline, requests get queued and wait their turn. So latency goes up, but throughput remains the same.
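
As a rough sketch of what "knowing its limits" means (my illustration, not the paper's code), a stage can sit behind a bounded queue: a worker drains it at its own pace, so throughput is capped at what the stage can actually do, and excess load only adds queueing delay or gets rejected.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class WellConditionedStage {
        // bounded queue: the stage never accepts more work than it is sized for
        private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);

        // producers call this; a false return is back-pressure, not over-commitment
        public boolean enqueue(Runnable request) {
            return queue.offer(request);
        }

        // one worker drains the queue at the stage's own pace; throughput is
        // bounded by this loop, and extra load shows up only as queueing delay
        public void runWorker() throws InterruptedException {
            while (true) {
                Runnable request = queue.take(); // blocks while the stage is idle
                request.run();
            }
        }
    }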


Page 11: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Opaque Resource Allocation:

incorporate Event-Driven Design in the pipeline

remember -- only one thread!

no context switch overhead

the paradigm allows for adaptive scheduling decisions

adaptive scheduling and responsible resource management are the keys to maintaining control and not overcommitting resources to handle unpredictable spikes in load
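
A bare-bones sketch of that event-driven style (mine, assuming a single dispatcher thread): one loop pulls events off a queue and runs the matching handler to completion, so there are no context switches and the scheduling policy lives entirely inside the loop.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.function.Consumer;

    public class EventLoop {
        public record Event(String type, Object payload) {}

        private final BlockingQueue<Event> events = new LinkedBlockingQueue<>();
        private final Map<String, Consumer<Event>> handlers;

        public EventLoop(Map<String, Consumer<Event>> handlers) {
            this.handlers = handlers;
        }

        public void post(Event e) {
            events.add(e);
        }

        // the single thread: no context-switch overhead, and the order in which
        // events are taken off the queue is the scheduling decision
        public void run() throws InterruptedException {
            while (true) {
                Event e = events.take();
                handlers.getOrDefault(e.type(), ignored -> {}).accept(e);
            }
        }
    }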


Page 12: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Back to the argument:

Event-driven Programming vs. Using Threads

the paper recommends event-driven design as the key to effective scalability, yet it certainly doesn’t make it sound as easy as Ousterhout does

event handlers as finite state machines

scheduling and ordering of events ‘all in the hands’ of the application developer

it certainly looks at the more complex side of event-driven programming
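
To give a flavor of "event handlers as finite state machines", here is a tiny hypothetical per-request state machine (my illustration, with made-up event names): each event advances the request one step, and handling events that arrive out of the expected order is entirely the developer's problem.

    public class RequestStateMachine {
        enum State { READ_HEADERS, FETCH_BODY, SEND_RESPONSE, DONE }

        private State state = State.READ_HEADERS;

        // each event moves the request forward one step; ordering is on the developer
        public void onEvent(String event) {
            switch (state) {
                case READ_HEADERS  -> { if (event.equals("headersRead")) state = State.FETCH_BODY; }
                case FETCH_BODY    -> { if (event.equals("bodyFetched")) state = State.SEND_RESPONSE; }
                case SEND_RESPONSE -> { if (event.equals("responseSent")) state = State.DONE; }
                case DONE          -> { /* stray events are ignored */ }
            }
        }
    }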


Page 13: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Anatomy of SEDA:

Staged Event-Driven Architecture: a network of stages

one big event becomes a series of smaller events, which improves modularity and design

event queues are managed separately from the event handlers

dynamic resource controllers (it’s alive!) allow apps to adjust dynamically to load

the culmination is a managed pipeline


Page 14: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Anatomy of SEDA:

a queue before each stage

stages queue events for other stages

note the modularity

the biggest advantage is that each queue can be managed individually


Page 15: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Anatomy of a Stage:

event handler

code is provided by application developer

incoming event queue

for portions of requests handled by each stage

thread pool

threads dynamically allocated to meet load on the stage

controller

oversees stage operation and responds to changes in load, both locally and globally
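
Putting those four pieces together, a stage might look roughly like this (a sketch of the idea, not the actual SEDA/Sandstorm API): an application-supplied handler, an incoming event queue, a small thread pool draining it in batches, and a queue-length hook for a controller to watch.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Stage<E> {
        // application-supplied code: handles a batch of events, may enqueue to other stages
        public interface EventHandler<T> { void handle(List<T> batch); }

        private final BlockingQueue<E> incoming = new ArrayBlockingQueue<>(4096); // incoming event queue
        private final ExecutorService threads = Executors.newCachedThreadPool();  // stage-local thread pool
        private final EventHandler<E> handler;

        public Stage(EventHandler<E> handler, int workers) {
            this.handler = handler;
            for (int i = 0; i < workers; i++) threads.submit(this::workerLoop);
        }

        // other stages hand events to this stage here; false means the queue is full
        public boolean enqueue(E event) { return incoming.offer(event); }

        // the controller reads this to decide whether to add threads or grow batches
        public int queueLength() { return incoming.size(); }

        private void workerLoop() {
            try {
                while (true) {
                    List<E> batch = new ArrayList<>();
                    batch.add(incoming.take());  // wait for at least one event
                    incoming.drainTo(batch, 31); // then batch up to 32 per handler invocation
                    handler.handle(batch);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }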


Page 16: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Anatomy of a Stage:

the thread pool controller manages the allocation of threads; the number is determined by the length of the queue

the batching controller determines the number of events to process in each invocation of the handler
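
A hypothetical version of those two feedback loops (my thresholds and numbers, not the paper's): sample the queue periodically, add a thread while the backlog keeps growing, and trade batch size against latency depending on load.

    import java.util.function.IntConsumer;
    import java.util.function.IntSupplier;

    // a periodic feedback loop in the spirit of SEDA's controllers (illustrative thresholds)
    public class StageController implements Runnable {
        private final IntSupplier queueLength;  // sampled from the stage
        private final IntConsumer setThreads;   // applied back to the stage's thread pool
        private final IntConsumer setBatchSize; // applied back to the handler loop
        private int threads = 1;

        public StageController(IntSupplier queueLength, IntConsumer setThreads, IntConsumer setBatchSize) {
            this.queueLength = queueLength;
            this.setThreads = setThreads;
            this.setBatchSize = setBatchSize;
        }

        @Override
        public void run() {
            while (true) {
                int backlog = queueLength.getAsInt();

                // thread pool controller: grow the pool while the queue keeps backing up
                if (backlog > 100 && threads < 16) setThreads.accept(++threads);

                // batching controller: big batches amortize overhead under load,
                // small batches keep per-event latency low when the stage is quiet
                setBatchSize.accept(backlog > 100 ? 64 : 8);

                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }
    }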


Page 17: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Asynchronous I/O:

SEDA provides two asynchronous I/O primitives

asynchronous = non-blocking

asynchronous socket I/O

intervenes between the sockets and the application

asynchronous file I/O

intervenes to handle file I/O for the application

different in implementation, but each is designed with event-driven semantics
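
The socket side can be approximated with java.nio (again a sketch of the idea, not SEDA's actual asynchronous socket package): a single selector thread watches every connection and turns readiness notifications into events, instead of dedicating a blocked thread to each socket.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class AsyncSocketLayer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);                  // never block on accept
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(8192);
            while (true) {
                selector.select();                            // wait for readiness, not for data
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        buf.clear();
                        int n = ((SocketChannel) key.channel()).read(buf);
                        // a real system would wrap these bytes in a 'packet read' event
                        // and enqueue it to the next stage; here we only watch for EOF
                        if (n < 0) key.channel().close();
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }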


Page 18: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Performance:

throughput is sustained!


Page 19: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


Summary:

load is unpredictable

the most efficient use of resources is to scale to load without overcommitting

system resources must be managed dynamically and responsibly

staged network design culminates in a well-conditioned pipeline that manages itself

event-driven design for appropriate scalability

threads are used in stages when possible
