iiwas 2010


Transcript of iiwas 2010

Page 1: iiwas 2010

1/13 Michele Stecca

cipi

Centro di Ricerca sull’Ingegneria delle Piattaforme Informatiche

Thread Management in Mashup Execution Platforms

Michele Stecca and Massimo Maresca

Computer Platform Research Center (CIPI)

University of Padova & Genova (Italy)


Paris – Gennevilliers

November 10, 2010

Page 2: iiwas 2010


Agenda

1. Introduction

2. Overview of the Platform

3. Classification of Service Components

4. Case Study: Polling Services

5. Conclusions and Future Work

Page 3: iiwas 2010


1. Introduction (1/4)

Scenario

Availability of contents and services through Web 2.0 technologies such as RSS Feed, Atom, REST-WS, SOAP-WS, etc.

Availability of tools for the rapid development of convergent Composite Services (a.k.a. Mashups) that combine different resources/contents, such as Yahoo! Pipes, JackBe Presto, etc.

Page 4: iiwas 2010


1. Introduction (2/4)

Mashup classification based on execution platform location:

Client side: the Service Components (SC1, SC2, ..., SCN) run in the User Node (Browser)

Server side: the Browser (User) sends a Request to the Mashup Engine (Server), which hosts the Service Components and returns the Results

The analysis is about Server Side Mashup Execution Platforms:

Long Running Executions (i.e., the user can also be "offline")

User Terminal power consumption and availability (e.g., it may be disconnected)

Security (e.g., private data, malicious components, etc.)

Page 5: iiwas 2010


1. Introduction (3/4)

We refer to Event Driven Mashups (i.e., Composite Services in which Service Components generate events during the execution)

Remarks:

Server Side execution model (long running Mashups)

Event driven model to cope with Telecom Operator services (calls, SMS, etc.)

Page 6: iiwas 2010


1. Introduction (4/4)

Thread Management

The Mashup Execution Engine must manage a huge number of concurrent Mashup executions, called Sessions

Throughput, Latency and Scalability are the key issues in supporting the efficient execution of Mashup Sessions

The number of concurrent Threads strongly influences the Mashup Execution Engine performance

We have explored the following two design choices:

Using an already existing standard platform (e.g., JEE-compliant platforms like Red Hat JBoss, Sun GlassFish, etc.), which provides a general purpose Thread Model

Implementing the execution platform from scratch, which allows a Mashup-specific Thread Model (i.e., complete control over resource consumption)

Page 7: iiwas 2010


2. Overview of the platform (1/2)

The Orchestrator executes the Logic of the Composite Service

Each Service Proxy:

Wraps one external functionality

Interacts with the Orchestrator through a standard interface

Interacts with the external resource through a specific protocol

External resources are made available by 3rd Parties
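The "standard interface" between the Orchestrator and the Service Proxies might be sketched as follows; the interface and method names here are illustrative assumptions, not the platform's actual API:

```java
import java.util.Map;

// Hypothetical sketch of the Orchestrator/Service Proxy contract.
// The Orchestrator starts invocations; the proxy reports events back.
interface EventListener {
    void onEvent(String sessionId, Map<String, String> eventData);
}

interface ServiceProxy {
    // Called by the Orchestrator to start an invocation within a Session.
    void invoke(String sessionId, Map<String, String> inputProps);

    // The Orchestrator registers itself to receive the events that the
    // wrapped external resource generates during execution.
    void setEventListener(EventListener listener);
}

// A trivial proxy that immediately echoes its input back as an event,
// just to show the interaction flow between the two sides.
class EchoProxy implements ServiceProxy {
    private EventListener listener;

    public void setEventListener(EventListener l) { this.listener = l; }

    public void invoke(String sessionId, Map<String, String> props) {
        if (listener != null) listener.onEvent(sessionId, props);
    }
}
```

A real proxy would talk to the external resource over its specific protocol (REST, SOAP, RSS polling, etc.) before emitting the event.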

Page 8: iiwas 2010


2. Overview of the platform (2/2)

From a Platform (Container)-based solution to the Monolithic approach

Page 9: iiwas 2010


3. Service Component Classification

Call-Response: the service is exposed by the service provider to consumers through a synchronous invocation (e.g., a Web Service). The consumer invokes the service and blocks until the service provider returns the result.

Polling: the service is exposed by the service provider to consumers through a synchronous invocation (e.g., a Web Service) or through a syndication technology (e.g., an RSS Feed). The consumer keeps polling the service and retrieves the desired content when available.

Callback: the service is exposed through a synchronous invocation (e.g., a Web Service). The consumer issues such an invocation to activate the external service and to configure a "callback URL", to be used by the external service to notify events of interest to the consumer.
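The three activation paradigms can be contrasted with a minimal Java sketch; all types and names below are illustrative, not taken from the platform:

```java
import java.util.Optional;
import java.util.function.Consumer;

// Call-Response: synchronous; the consumer blocks until the result returns.
interface CallResponseService { String call(String request); }

// Polling: the consumer asks repeatedly; the provider may have nothing yet.
interface PollingService { Optional<String> poll(); }

// Callback: the consumer registers a handler (standing in for the
// "callback URL") and the provider invokes it when an event occurs.
interface CallbackService { void register(Consumer<String> callback); }

// A toy provider implementing all three paradigms over the same piece of
// state, purely for illustration.
class ToyProvider implements CallResponseService, PollingService, CallbackService {
    private String content;               // non-null once "the event" happens
    private Consumer<String> callback;

    public String call(String request) { return "echo:" + request; }

    public Optional<String> poll() { return Optional.ofNullable(content); }

    public void register(Consumer<String> cb) { this.callback = cb; }

    // Simulates the external state change the consumer is waiting for.
    public void publish(String c) {
        this.content = c;
        if (callback != null) callback.accept(c);
    }
}
```

The key difference for Thread Management: Call-Response holds a thread for the whole invocation, Polling requires a recurring activity on the consumer side, and Callback consumes no consumer thread until the event arrives.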

Page 10: iiwas 2010


4. Case study: Polling Services (1/2)

Trivial Solution: one Thread for each active Polling Service invocation

Proposed Solution: one Thread (see code below) + Fixed Thread Pool for each Polling Service

    while (true) {
        sleep(Period)
        for each entry in <Set of Active Invocations> {
            extract the input props
            use input props to access the external resource
            if (the desired state change occurred) {
                create a new Task T
                submit T to the Thread Pool for execution
            } // end if
        } // end for each
    } // end while
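Using java.util.concurrent, the single-poller-plus-pool scheme above might look like the following sketch. Class and method names are illustrative; the real engine would resume the Mashup Session where this sketch merely prints:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.*;

// One poller thread per Polling Service, plus a bounded fixed pool that
// runs the tasks triggered by observed state changes.
class PollingServiceDriver {
    interface ExternalResource {
        // Returns true when the desired state change has occurred
        // for the given input properties.
        boolean stateChanged(Map<String, String> inputProps);
    }

    private final Set<Map<String, String>> activeInvocations =
            ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService poller =
            Executors.newSingleThreadScheduledExecutor();
    private final ExecutorService pool;
    private final ExternalResource resource;

    PollingServiceDriver(ExternalResource resource, int poolSize, long periodMillis) {
        this.resource = resource;
        this.pool = Executors.newFixedThreadPool(poolSize);
        // The single poller thread wakes up every Period and scans the
        // set of active invocations, as in the pseudocode above.
        poller.scheduleWithFixedDelay(this::scan,
                periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    void addInvocation(Map<String, String> inputProps) {
        activeInvocations.add(inputProps);
    }

    private void scan() {
        for (Map<String, String> props : activeInvocations) {
            if (resource.stateChanged(props)) {
                activeInvocations.remove(props);
                // Submit the continuation to the bounded pool instead of
                // spawning a fresh Thread per event.
                pool.submit(() -> handleEvent(props));
            }
        }
    }

    protected void handleEvent(Map<String, String> props) {
        // In the real engine this would resume the Mashup Session.
        System.out.println("event for " + props);
    }

    void shutdown() {
        poller.shutdownNow();
        pool.shutdownNow();
    }
}
```

Memory use is now bounded by construction: one poller thread plus poolSize worker threads, regardless of how many invocations are active.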

Page 11: iiwas 2010


4. Case study: Polling Services (2/2)

Comparison of the two solutions:

Memory Consumption (due to the allocation of Thread Stacks + Objects in the JVM Heap):

Uncontrolled in the trivial case (it depends on the number of Service Invocations)

Bounded in the proposed solution (proportional to 1 + Thread Pool size)

Response time (influenced by the heavy Thread creation procedure):

250000 requests satisfied in 41.3 secs in the trivial case

250000 requests satisfied in 20.6 secs in the proposed solution

The difference is due to the Thread creation time: 0.08 ms * 250000 = 20 secs

Note: 0.08 ms = creation time of a single Thread on the testing machine
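The effect measured above can be reproduced in spirit with a rough micro-benchmark sketch. Absolute timings are entirely machine-dependent and will not match the slide's figures; the per-thread cost of 0.08 ms is the authors' measurement on their test machine:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Contrasts thread-per-task (the trivial solution) with a small fixed
// pool (the proposed solution) over N lightweight tasks.
public class ThreadCostSketch {
    static final int N = 2000;

    // Runs both variants and returns the total number of tasks executed.
    static int run() throws Exception {
        AtomicInteger done = new AtomicInteger();
        Runnable task = done::incrementAndGet;

        // Trivial solution: one new Thread per task.
        long t0 = System.nanoTime();
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) { ts[i] = new Thread(task); ts[i].start(); }
        for (Thread t : ts) t.join();
        System.out.printf("thread-per-task: %d ns/task%n",
                (System.nanoTime() - t0) / N);

        // Proposed solution: reuse a small fixed pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long t1 = System.nanoTime();
        for (int i = 0; i < N; i++) pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("fixed pool:      %d ns/task%n",
                (System.nanoTime() - t1) / N);

        return done.get();
    }

    public static void main(String[] args) throws Exception { run(); }
}
```

On most machines the fixed-pool variant shows a markedly lower per-task cost, consistent with the roughly 2x gap reported on the slide.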

Page 12: iiwas 2010


5. Conclusions and Future Work

We propose a classification of the Component Service activation paradigms: Call-Response, Polling, Callback

We propose an ad-hoc Thread Management model based on such a classification to overcome the limitations of general purpose platforms

We have presented some preliminary performance test results

Future Work:

Analysis of the interaction between the JVM MM and the Operating System MM

Effects of Garbage Collection on the system performance

Page 13: iiwas 2010


Thank you for your attention