Hasthi talk at ICWS 2009


Description: You can find a detailed presentation of this work at http://www.slideshare.net/hemapani/dissertation-defence-enforcing-userdefined-management-logic-in-large-scale-systems.

Transcript of Hasthi talk at ICWS 2009

Page 1: Hasthi talk at ICWS 2009

ENFORCING USER-DEFINED MANAGEMENT LOGIC IN LARGE SCALE SYSTEMS

Srinath Perera, Dennis Gannon Indiana University, Bloomington


Page 2: Hasthi talk at ICWS 2009

Outline

Motivation & the Problem: Managing Large Scale Systems using User-defined Management Logic

Related Work

Proposed Solution: Hasthi

Scalability Results

Contributions


Page 3: Hasthi talk at ICWS 2009

Motivation: Large Scale systems

• IT is becoming a part of our everyday life.
• Increases the size of potential user bases of systems (Google, Facebook, Amazon …).
• Information avalanche: national- and global-scale data collection.
• Success in this setting is decided by our ability to make sense of this data – scale matters (Google!).

• Technological advances:
◦ Connectivity, SOA – complex systems are possible.
◦ Computing power everywhere (multicore, smartphones).
◦ Cloud – lowers the barrier to scale.

We have the need and the means to build large-scale systems.


Page 4: Hasthi talk at ICWS 2009

Building them is Feasible, but Keeping them Running ??

Changes are the norm rather than the exception – "10,000 servers, each having an MTTF of a thousand days => 10 failures/day" [Jeff Dean].

High Operational Cost – when a system scales up, complexity increases.
◦ More than 75% of TCO (Total Cost of Ownership), based on data from Patterson et al. (dominated by salaries).
◦ 50% of the IT budget is spent on recovering from failures [Ganek et al.].

Unreliable Middleware – grid reliability across all operations is 55%-80% [Khalili et al.]. Even at 80% per operation, the success rate of a service or workflow with 6 grid operations is only 0.8^6 ≈ 0.26!

Efforts to avoid failures have been unsuccessful – “Not a problem to be solved, but a fact to cope with” [Patterson]


Page 5: Hasthi talk at ICWS 2009

System Management as a Potential Solution + the Role of User-Defined Management Logic

Page 6: Hasthi talk at ICWS 2009

The Problem

Large-scale systems need many managers
◦ One manager neither scales nor is robust.

Each manager has a partial view of the system
◦ A subset of resources is assigned to each manager.

But a global view is preferred (ease of authoring logic)
◦ Logic that works only on local data needs emergent properties, which are hard for users to author.
◦ We all think in terms of global properties.

Example: "If the system does not have 5 message brokers, create new brokers and connect them to the broker network." That is: detect <5 brokers, find the best place to create a new one, create it, and connect it to the existing brokers.

Problem: How do we enforce user-defined management logic (that depends on a global view) on large-scale systems? And how do we apply such a framework to manage systems?
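
To make the example concrete, the broker requirement above could be expressed as a global rule in the same Drools style Hasthi uses on Page 13. This is only a sketch: the MessageBroker meta-object, its fields, and CreateBrokerAction are hypothetical stand-ins rather than Hasthi's published types; only the rule syntax and the `system` global mirror the later example.

    rule "MaintainFiveMessageBrokers"
    when
        // Global view: count the broker meta-objects that are currently alive.
        $alive : Number( intValue < 5 ) from accumulate(
            $b : MessageBroker( state != "CrashedState" ), count( $b ) )
    then
        // Ask the framework to create the missing brokers and wire them into the
        // existing broker network (CreateBrokerAction is a hypothetical action).
        system.invoke( new CreateBrokerAction( 5 - $alive.intValue() ) );
    end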


Page 7: Hasthi talk at ICWS 2009

| Approach | Scalable | Robust | Ease of writing management logic | Problems |
| --- | --- | --- | --- | --- |
| Decentralized control (e.g. DMonA, Deugo et al.) | Highly | Yes | Hard | Hard for users to write rules that achieve emergent behavior |
| Complex event processing (DREAM) | Yes | Possible | Not easy | The event model has limited memory |
| Consistent view across managers (e.g. Georgiadis et al.) | No | Yes | Yes | Needs ordered reliable multicast, which does not scale |
| Hierarchical control with aggregation (MonALISA) | Highly | Possible | Not easy | Loses the identity of individual resources due to aggregation |
| Hierarchy with policies at each level (e.g. WildCat) | Yes | Possible | Possible | Policies are not as explicit as rules |
| State machine (Dubey et al.) | Yes | Possible | Not easy | Users have to construct the state machine, which is hard |


Page 8: Hasthi talk at ICWS 2009

Our Solution

Hasthi System Management Framework as a solution: a dynamic, robust system management framework. We showed (details in the next slides) that it can scale while enforcing user-defined management logic that depends on global assertions.


Page 9: Hasthi talk at ICWS 2009


Big Picture (Hasthi)

Hasthi has three parts:
◦ Manager Cloud – a distributed architecture that binds the managers and resources in the system into one cohesive unit.
◦ Meta-Model – represents the system state.
◦ Decision Framework.

Page 10: Hasthi talk at ICWS 2009

Manager Cloud

Managers form a P2P network (Pastry), which is used for Initialization and Recovery (Elections).

Normal Operations use SOAP over HTTP
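
A minimal sketch of these two communication paths, assuming hypothetical CoordinatorEndpoint and PastryOverlay interfaces (Hasthi's real classes are not shown in the talk): normal reporting goes directly to the coordinator over SOAP/HTTP, and the overlay is used only for initialization and for electing a replacement when the coordinator becomes unreachable.

    // Hypothetical sketch, not Hasthi's API: a manager cloud member with a normal
    // SOAP/HTTP path to the coordinator and an overlay-based recovery path.
    public class ManagerCloudMember {

        interface CoordinatorEndpoint { void reportChanges(Object delta); }  // SOAP/HTTP in Hasthi
        interface PastryOverlay { CoordinatorEndpoint electCoordinator(); }  // initialization/recovery

        private final PastryOverlay overlay;
        private CoordinatorEndpoint coordinator;

        public ManagerCloudMember(PastryOverlay overlay) {
            this.overlay = overlay;
            this.coordinator = overlay.electCoordinator();   // initialization uses the overlay
        }

        /** Normal operation: push this epoch's meta-model changes to the coordinator. */
        public void reportEpoch(Object delta) {
            try {
                coordinator.reportChanges(delta);
            } catch (RuntimeException coordinatorUnreachable) {
                // Recovery: fall back to the overlay to run an election, then retry once.
                coordinator = overlay.electCoordinator();
                coordinator.reportChanges(delta);
            }
        }
    }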

Page 11: Hasthi talk at ICWS 2009

Meta-Model

The meta-model represents the monitoring data collected from the system; the summarized meta-model provides a global view.

Delta-consistency – changes are reflected within a bounded time (a concept borrowed from shared memory multiprocessors [see Singla et al.]).
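
A minimal sketch of how the delta-consistency bound can be made explicit, assuming a hypothetical MetaObject class rather than Hasthi's actual meta-model types: each meta-object records when it was last refreshed, and readers of the (summarized) meta-model treat it as consistent only if the data is within the configured bound.

    // Hypothetical sketch, not Hasthi's classes: a meta-object that tracks when its
    // monitoring data was last refreshed, so readers can check the staleness bound.
    public class MetaObject {
        private final String resourceId;
        private volatile String state;
        private volatile long lastUpdatedMillis;

        public MetaObject(String resourceId) { this.resourceId = resourceId; }

        /** Called when a heartbeat or monitoring event for this resource arrives. */
        public void update(String newState) {
            this.state = newState;
            this.lastUpdatedMillis = System.currentTimeMillis();
        }

        /** Delta-consistency check: the value is no staler than the given bound. */
        public boolean isFreshWithin(long boundMillis) {
            return System.currentTimeMillis() - lastUpdatedMillis <= boundMillis;
        }

        public String getResourceId() { return resourceId; }
        public String getState() { return state; }
    }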


Page 12: Hasthi talk at ICWS 2009

Decision Framework

Users define management logic as rules: local and global.
◦ Manager control loops evaluate partial meta-models using local rules.
◦ The coordinator control loop evaluates the summarized meta-model using global rules (the global view).
◦ Actions triggered by rules analyze the meta-model and decide on solutions.
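
A minimal sketch of the two epoch-driven control loops, assuming hypothetical PartialMetaModel, SummarizedMetaModel, and RuleEngine interfaces (not Hasthi's API): each manager evaluates its partial meta-model with local rules every epoch, and the coordinator evaluates the summarized meta-model with global rules every epoch.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch of the manager and coordinator control loops.
    public class ControlLoops {

        interface PartialMetaModel { /* meta-objects for the resources assigned to one manager */ }
        interface SummarizedMetaModel { /* global view assembled from all partial meta-models */ }
        interface RuleEngine { void evaluate(Object metaModel); }   // e.g. backed by Drools

        public static void startManagerLoop(PartialMetaModel partial, RuleEngine localRules,
                                            long epochSeconds) {
            schedule(() -> localRules.evaluate(partial), epochSeconds);
        }

        public static void startCoordinatorLoop(SummarizedMetaModel summary, RuleEngine globalRules,
                                                long epochSeconds) {
            schedule(() -> globalRules.evaluate(summary), epochSeconds);
        }

        private static void schedule(Runnable task, long epochSeconds) {
            // One evaluation per epoch (the tests later use a 30-second epoch).
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(task, epochSeconds, epochSeconds, TimeUnit.SECONDS);
        }
    }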


Page 13: Hasthi talk at ICWS 2009

Management Rules

Rules (Drools) evaluate meta-objects (which represent resources) and execute actions, which analyze meta-objects and decide on solutions.

    rule "RestartFailedServices"
    when
        service : ManagedService( state == "CrashedState" );
        host : Host( state != "CrashedState", service.host == name );
    then
        system.invoke(new RestartAction(service), new ActionCallback() {
            public void actionSucessful(ManagementAction action) { ..... }
            public void actionFailed(ManagementAction action, Throwable e) {
                service.setState("UnRepairableState");
                system.invoke(new UserInteractionAction(system, service, action, e));
            }
        });
    end


When the condition given using the object query language is met, actions in the then-clause are carried out.

Hasthi uses the Rete algorithm to evaluate meta-objects and execute corrective actions, trading memory (space) for evaluation time.
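
The talk does not show how rules are wired to the engine; a minimal sketch using the standard Drools 5 API could look like the following. The meta-object classes and the `system` object are stand-ins for Hasthi's own types; only the Drools calls (KnowledgeBuilder, StatefulKnowledgeSession, insert/update, fireAllRules) are real API, and insert/update is what lets Rete re-evaluate only the changed facts each epoch.

    import java.util.HashMap;
    import java.util.Map;

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.runtime.StatefulKnowledgeSession;
    import org.drools.runtime.rule.FactHandle;

    // Sketch of loading rules such as "RestartFailedServices" and evaluating them.
    public class RuleEvaluator {

        private final StatefulKnowledgeSession session;
        private final Map<Object, FactHandle> handles = new HashMap<Object, FactHandle>();

        public RuleEvaluator(String drlOnClasspath, Object system) {
            KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            builder.add(ResourceFactory.newClassPathResource(drlOnClasspath), ResourceType.DRL);
            if (builder.hasErrors()) {
                throw new IllegalArgumentException(builder.getErrors().toString());
            }
            KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(builder.getKnowledgePackages());
            session = kbase.newStatefulKnowledgeSession();
            session.setGlobal("system", system);   // lets rule consequences call system.invoke(...)
        }

        /** Insert a new meta-object, or tell Rete that an existing one changed. */
        public void metaObjectChanged(Object metaObject) {
            FactHandle handle = handles.get(metaObject);
            if (handle == null) {
                handles.put(metaObject, session.insert(metaObject));
            } else {
                session.update(handle, metaObject);   // Rete re-evaluates only affected rules
            }
        }

        /** Called by the control loop once per epoch. */
        public void evaluate() {
            session.fireAllRules();
        }
    }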

Page 14: Hasthi talk at ICWS 2009

Management Actions

Action Types:

1. Create a New service

2. Restart a running service or recover a failed service

3. Relocate a service

4. Tune and configure a resource – change the configuration of a resource or change the structure of the system.

5. User Interaction Action

Action implementations:
◦ Use shell scripts (e.g. service start or stop) and execute them using a Host Agent running on each host.
◦ Use a Hasthi Agent integrated with each resource.

Hasthi provides default management actions, but users can write their own.
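
As an illustration of a user-written action, here is a hypothetical sketch in the spirit of the RestartAction used in the rule on Page 13. ManagementAction and HostAgent as shown are stand-ins rather than Hasthi's published classes; the real framework would dispatch the script through the Host Agent on the resource's host.

    import java.io.IOException;

    // Hypothetical sketch of a user-defined action executed via a Host Agent.
    public abstract class ManagementAction {
        public abstract void execute() throws Exception;
    }

    class RestartServiceAction extends ManagementAction {

        /** Stand-in for the Host Agent that runs shell scripts on the target host. */
        interface HostAgent { void runScript(String scriptName, String... args) throws IOException; }

        private final HostAgent agent;
        private final String serviceName;

        RestartServiceAction(HostAgent agent, String serviceName) {
            this.agent = agent;
            this.serviceName = serviceName;
        }

        @Override
        public void execute() throws Exception {
            // e.g. a stop/start wrapper script deployed alongside the Host Agent
            agent.runScript("restart-service.sh", serviceName);
        }
    }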


Page 15: Hasthi talk at ICWS 2009

Scalability: Test Setup

Main Test Setup:
◦ A large-scale deployment of LEAD – multiple replicas of the complete LEAD stack.
◦ Each service simulates a management workload using a randomized algorithm.
◦ A set of rules manages the system, and each test ran for 1 hour with a 30-second epoch time.


Coordinator Test Setup:
◦ A Test-Manager simulates all the messages generated by a normal manager managing a set of resources.
◦ We simulated a large-scale system using Test-Managers; the coordinator does not see a difference.

Q?

Page 16: Hasthi talk at ICWS 2009


Measurements (Metrics)

Page 17: Hasthi talk at ICWS 2009

One Manager Overhead (Resource Heartbeat Latency, Manager Loop Overhead, Manager Heartbeat Latency)

Managers Overhead (Coordinator Loop, Manager Heartbeat )

One manager scales to 5000-8000 resources; Hasthi scales further with added managers. More tests are needed to find the limits.


Page 18: Hasthi talk at ICWS 2009

Scalability: Test Setup

Main Test Setup:
◦ A large-scale deployment of LEAD – multiple replicas of the complete LEAD stack.
◦ Each service simulates a management workload using a randomized algorithm.
◦ A set of rules manages the system, and each test ran for 1 hour with a 30-second epoch time.


Coordinator Test Setup:
◦ A Test-Manager simulates all the messages generated by a normal manager managing a set of resources.
◦ We simulated a large-scale system using Test-Managers; the coordinator does not see a difference.

Q?

Page 19: Hasthi talk at ICWS 2009

Coordinator Limit: (Manager Heartbeat Latency, Coordinator Loop Overhead) vs. Resource count

The overhead is close to linear; the coordinator scales to 100,000 resources and 1000 managers, and the number of managers does not make much difference.

Why? (1) Summarization, (2) only changes are transferred, and (3) the Rete algorithm, which only evaluates changes (trading memory for speed).


Page 20: Hasthi talk at ICWS 2009

Manager Independence: (Resource heartbeat, Manager Loop vs. Manager Heartbeat) vs. resources per Manager

We measured the limit of one manager and the limit of the coordinator. Hypothesis: a manager's overhead depends only on the resources assigned to it, not on other managers or resources in the system; if this holds, we can scale up Hasthi (e.g. 100 managers with 1000 resources each).

Verifying the hypothesis: a scatter plot of overhead vs. number of resources per manager shows that points with the same X value are reasonably close to each other, so the hypothesis holds up to at least 2000 resources.

Why? Managers do not usually interact with other managers, but talk with the coordinator.


Page 21: Hasthi talk at ICWS 2009

Scalability: Summary

1. One manager scales to 5000-8000 resources.

2. Managers depend only on the resources assigned to them (at least up to 2000 resources) and are not affected by other managers in the system.

3. The coordinator scales to 100,000 resources and 1000 managers (100-1000 resources per manager, below the 2000-resource limit in #2).

Therefore, the system scales to 100,000 resources.


Q?

Page 22: Hasthi talk at ICWS 2009

Sensitivity: Rules

To find the sensitivity to rules, we used 7 rule sets, each having more rules than the one before, with 40,000 resources.

The overhead is almost linear and appears stable. We also verified this by running 100,000 resources against the most complex rule set.


Page 23: Hasthi talk at ICWS 2009

Sensitivity: Epoch Time

Epoch times are the periods between heartbeats, control-loop evaluations, etc., and they determine how fast Hasthi reacts to failures.

Why does the overhead decrease with a smaller epoch? The Rete algorithm remembers old results and only evaluates changes; a smaller epoch means fewer changes per evaluation, which means less overhead.


Page 24: Hasthi talk at ICWS 2009

Sensitivity: Workload

We increased the failures in the system (increasing the workload on Hasthi) and measured with 40,000 resources.

Hasthi remains stable. Why? Hasthi uses a job queue to execute actions asynchronously, so it can withstand higher workloads and surges.


Page 25: Hasthi talk at ICWS 2009

Contributions

Problem: How do we enforce user-defined management logic (that depends on a global view of the managed system) on large-scale systems?

We proposed an architecture to solve this problem and demonstrated that, despite its dependency on a global view, a management framework can scale to manage most real-world use cases.


Page 26: Hasthi talk at ICWS 2009

Questions