OSMOSIS Final Presentation (posted 20-Dec-2015)

Introduction

Osmosis System

• Scalable, distributed system.

• Many-to-many publisher-subscriber real-time sensor data streams, with QoS-constrained routing.

• Ability to perform distributed processing on stream data.

• Processing threads can migrate between hosts.
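The many-to-many publisher-subscriber model above can be sketched as a minimal in-process broker. This is an illustrative simplification, not the OSMOSIS API; all class and method names are invented:

```java
// Minimal publish/subscribe sketch: many publishers and subscribers per
// stream name. Names (Broker, subscribe, publish) are illustrative only.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class PubSubSketch {
    static class Broker {
        Map<String, List<Consumer<String>>> subs = new HashMap<>();

        void subscribe(String stream, Consumer<String> c) {
            subs.computeIfAbsent(stream, k -> new ArrayList<>()).add(c);
        }

        void publish(String stream, String data) {
            // Deliver to every subscriber of this stream.
            subs.getOrDefault(stream, List.of()).forEach(c -> c.accept(data));
        }
    }

    public static void main(String[] args) {
        Broker b = new Broker();
        b.subscribe("sensors/temp", d -> System.out.println("A got " + d));
        b.subscribe("sensors/temp", d -> System.out.println("B got " + d));
        b.publish("sensors/temp", "21.5C"); // both subscribers receive it
    }
}
```

In the real system the broker role is distributed across the overlay rather than held by a single object.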

Introduction

Osmosis System (cont.)

• Distributed Resource Management.

• Maintain balanced load.

• Maximize number of QoS constraints met.

• Cross-platform implementation.

Motivation

Possible Systems

• A distributed video delivery system: multiple subscribers with different bandwidth requirements. The stream is compressed within the Pastry network en route for lower-bandwidth subscribers.

• A car traffic management system: cameras at each traffic light, connected in a large distributed network. Different systems can subscribe to different streams to determine traffic in specific areas, allowing traffic re-routing, statistics gathering, etc.

Motivation

Possible Systems

• A generalized SETI@home-style distributed system.

Clients can join and leave the Osmosis network. Once part of the network, they can receive content and participate in processing of jobs within the system.

Related Work

• Jessica Project: Distributed system with thread migration, but uses a centralized server for load balancing, which limits scalability.

• End System Multicast: End systems implement all multicast-related functionality, including membership management and packet replication. Builds a mesh of all nodes in the network to construct the tree topology; not scalable.

• Pastry/Scribe: Application-level multicast and anycast on a generic, scalable, self-organizing substrate for peer-to-peer applications. Extremely scalable, but pays no attention to QoS.

Related Work

• Osmosis goal: Find a middle ground between the optimal yet non-scalable performance of End System Multicast and the scalable yet sub-optimal performance of Pastry/Scribe.

Network of Nodes

[Figure: eight nodes identified by IP address: 168.122.188.143, 168.122.0.8, 122.188.134.5, 24.127.50.44, 128.122.193.4, 168.122.188.101, 24.127.50.30, 128.0.0.4]

Measure Resources

[Figure: the node diagram annotated with measured resources: 168.122.188.143 (CPU 15%, 200 kps), 168.122.0.8 (CPU 30%, 40 kps), 122.188.134.5 (CPU 0%, 700 kps), 24.127.50.44 (CPU 45%, 20 kps), 128.122.193.4 (CPU 10%, 200 kps), 168.122.188.101 (CPU 95%, 400 kps), 24.127.50.30 (CPU 10%, 5 kps), 128.0.0.4 (CPU 80%, 300 kps); a second panel shows the same nodes with CPU utilization only]

Measure Resources

[Figure: the node diagram repeated with CPU utilization annotations only]

Build Overlays

[Figure: logical overlays constructed over the measured nodes, with the same CPU utilization annotations]

Construct Streams

[Figure: a stream named "Back To The Future" with a Producer, a Processor, a Pass-through node, and two Consumers]

System Overview

[Figure: component diagram showing Transport, Resource Management, Thread Migration, and Migration Policy, linked by network & CPU utilization, routing information, and where/when-to-migrate signals]

Thread Migration

• Provides a means of transporting a thread from one machine to another.

• It has no knowledge of either the current resource state or overlay network.

System Overview

Resource Management

• API providing network and CPU utilization information.

• Used by Transport to create and maintain logical overlay.

• Used by thread Migration Policy to decide when and where to migrate.

System Overview

Transport

• Creates overlay network based on resource management information.

• Provides communications infrastructure.

• Provides API to Migration Policy allowing access to routing table information.

System Overview

Migration Policy

• Decides when and where to migrate threads based on pluggable policy.

• Leverages resource metrics and routing table of logical overlay in decision making.

• Calls the thread-migration API when it is time to migrate, passing the address of the destination node.
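A pluggable policy of this kind can be sketched as a small interface plus one concrete rule. The interface, the `leastLoaded` rule, and its threshold are illustrative assumptions, not the system's actual policy API:

```java
// Sketch of a pluggable migration policy: given local CPU load and the
// loads of overlay neighbors, decide whether and where to migrate.
// All names and the 0.2 "meaningfully less loaded" margin are assumed.
import java.util.Map;
import java.util.Optional;

public class MigrationPolicyDemo {
    interface MigrationPolicy {
        // Returns the address of the node to migrate to, if any.
        Optional<String> decide(double localCpu, Map<String, Double> neighborCpu);
    }

    // Example policy: migrate to the least-loaded neighbor once local CPU
    // exceeds a threshold, and only if that neighbor is clearly less loaded.
    static MigrationPolicy leastLoaded(double threshold) {
        return (localCpu, neighbors) -> {
            if (localCpu < threshold) return Optional.empty();
            return neighbors.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .filter(e -> e.getValue() < localCpu - 0.2)
                .map(Map.Entry::getKey);
        };
    }

    public static void main(String[] args) {
        MigrationPolicy p = leastLoaded(0.8);
        Map<String, Double> n = Map.of("128.0.0.4", 0.80, "122.188.134.5", 0.00);
        System.out.println(p.decide(0.95, n).orElse("stay")); // 122.188.134.5
    }
}
```

Because the policy is an interface, alternative rules (e.g. bandwidth-aware ones) can be swapped in without touching the migration mechanism.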


Resource Monitoring

To provide basic tools for scalability and QoS-constrained routing, the system must monitor resource availability.

• Measurements

• Network Characteristics (Bandwidth/Latency)

• CPU Characteristics (Utilization/Queue Length)

Resource Monitoring

Bandwidth Measurement

• When stream exists between hosts, passive measurement is performed.

• Otherwise, active measurements are carried out using the packet-train technique.

• Averaging function can be defined by user.

Implementation

• Uses the pcap library on Linux and Windows.
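The packet-train idea can be illustrated with a back-of-the-envelope computation: send a burst of equal-size packets and estimate bandwidth from their arrival dispersion. This is a simplified sketch (real tools filter jitter and cross-traffic); the method name is illustrative:

```java
// Estimate bandwidth from a packet train: total bits carried by the train
// divided by the time span between the first and last packet arrivals.
import java.util.List;

public class PacketTrain {
    /** Bandwidth in bits/s from packet size (bytes) and arrival times (ns). */
    static double estimateBandwidth(int packetBytes, List<Long> arrivalNanos) {
        long spanNanos = arrivalNanos.get(arrivalNanos.size() - 1) - arrivalNanos.get(0);
        // (n - 1) inter-packet gaps carry (n - 1) packets' worth of bits.
        long bits = (long) packetBytes * 8 * (arrivalNanos.size() - 1);
        return bits / (spanNanos / 1e9);
    }

    public static void main(String[] args) {
        // 1500-byte packets arriving 1 ms apart: roughly 12 Mbit/s.
        List<Long> t = List.of(0L, 1_000_000L, 2_000_000L, 3_000_000L);
        System.out.println(estimateBandwidth(1500, t));
    }
}
```

The user-defined averaging function mentioned above would then smooth successive estimates of this kind.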

Resource Monitoring

CPU Measures

• Statistics collected at user defined intervals.

Implementation

• Linux: at kernel level, a module collects data every jiffy; at user level, reads the /proc loadavg and uptime files.

• Windows: built-in performance counters.
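The user-level Linux path amounts to parsing /proc/loadavg, whose first field is the 1-minute load average. A minimal sketch (class and method names assumed, with a fallback for non-Linux hosts):

```java
// Read the 1-minute load average from /proc/loadavg, as a user-level
// Linux monitor would. The first whitespace-separated field is the
// 1-minute average; returns -1 if /proc is unavailable (non-Linux host).
import java.nio.file.Files;
import java.nio.file.Path;

public class LoadAvg {
    /** Parse the 1-minute load from a /proc/loadavg-formatted line. */
    static double parseLoad(String line) {
        return Double.parseDouble(line.split("\\s+")[0]);
    }

    static double oneMinuteLoad() {
        try {
            return parseLoad(Files.readString(Path.of("/proc/loadavg")));
        } catch (Exception e) {
            return -1.0; // /proc not available on this platform
        }
    }

    public static void main(String[] args) {
        System.out.println(oneMinuteLoad());
    }
}
```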

Resource Monitoring

Evaluation of techniques

• System/Network wide overhead of running measurement code.

• How different levels of system/network load affect measurement techniques.

Resource Monitoring

Evaluation of work

• Linux functionality implemented

• CPU measures evaluated

In progress

• Bandwidth measurement evaluation

• Windows implementation

Transport Overview

• Distributed, scalable, and widely deployable routing infrastructure.

• Creates a logical space correlated with the physical space.

• Distributed routing-table construction and maintenance.

• Multicast transmission of data with the ability to meet QoS.

Routing Infrastructure

Logical Space

• Assumes IP addresses provide an approximation of the physical topology.

• 1:1 mapping of logical to physical.

Routing Tables

• Maximum size of 1K entries.

• Obtained incrementally during joining.

• Progressively closer routing, a la Pastry.
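Progressively closer routing can be illustrated with a longest-shared-prefix next-hop choice over node IDs. This is a simplified sketch of the Pastry-style rule, not the actual routing-table code; the IDs and names are invented:

```java
// Pastry-style forwarding sketch: pick the routing-table entry that
// shares the longest ID prefix with the destination key, so each hop
// gets "progressively closer" in the logical space.
import java.util.List;

public class PrefixRouting {
    static int sharedPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    static String nextHop(String dest, List<String> table) {
        String best = table.get(0);
        for (String node : table)
            if (sharedPrefix(node, dest) > sharedPrefix(best, dest)) best = node;
        return best;
    }

    public static void main(String[] args) {
        List<String> table = List.of("10ab", "31f2", "37cc", "3701");
        System.out.println(nextHop("370f", table)); // prints 3701
    }
}
```

OSMOSIS additionally weighs QoS during tree construction, which plain prefix routing ignores.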

Multicast Tree Growing

• QoS considered during the join/build phase.

• Localized, secondary rendezvous points.

• Next-hop session information maintained by all nodes in the multicast tree.

Multicast Group Organizational Diagram

[Figure: rendezvous point RP(G) with paired nodes P1/SP1, P2/SP2, and P3/SP3]

Transport Evaluation

Planned

• Test the network stress and QoS of our system compared to IP Multicast, Pastry, and End-System Multicast.

Transport Future Work

• User and kernel space implementations.

• Integrate XTP to utilize the system.

Thread Migration

[Figure: a Client and a Server connected through a Processor; the up and down links carry no buffer, with the buffer held at the Processor]

Migration Overview

Both user and kernel level implementations:

• Change node state via the associated API (pass-through, processing, corked, and uncorked).

• Migrate nodes while maintaining stream integrity.
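The four node states can be sketched as a small state machine. The transition rules below are an assumption for illustration (e.g. that a node is corked before migration and uncorked afterwards); the slides name only the states themselves:

```java
// Node states from the slide, with assumed transition rules: cork before
// migrating so the stream can drain, uncork after; pass-through and
// processing interchange freely.
import java.util.Map;
import java.util.Set;

public class NodeState {
    enum State { PASS_THROUGH, PROCESSING, CORKED, UNCORKED }

    static final Map<State, Set<State>> ALLOWED = Map.of(
        State.PASS_THROUGH, Set.of(State.PROCESSING, State.CORKED),
        State.PROCESSING, Set.of(State.PASS_THROUGH, State.CORKED),
        State.CORKED, Set.of(State.UNCORKED),
        State.UNCORKED, Set.of(State.PASS_THROUGH, State.PROCESSING));

    static boolean canTransition(State from, State to) {
        return ALLOWED.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canTransition(State.PROCESSING, State.CORKED)); // true
        System.out.println(canTransition(State.CORKED, State.PROCESSING)); // false
    }
}
```

Checking transitions this way is one simple means of maintaining stream integrity across a migration.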

Kernel/C: fewer protection-domain switches, fewer copies, kernel threading, and scalability. Faster.

User/Java: Can run on any Java platform. Friendlier.

Migration Accomplishments

Kernel:

• IOCTL /dev interface.

• Design and code for the different node states.

• Streaming handled by kernel threads in the keventd process.

• Test and API interface.

Migration Accomplishments

Java:

• Command line OR socket-based API.

• Dynamic binding on processor object, which must be derived from a provided abstract class.

• Works with any socket producer/consumer pair.
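The dynamic-binding design on this slide can be sketched as follows. `StreamProcessor` is a hypothetical stand-in for the abstract class the system provides; the concrete subclass is invented for illustration:

```java
// Dynamic binding on a processor object: user code extends an abstract
// base class, and the runtime invokes process() without knowing the
// concrete type. StreamProcessor is a hypothetical stand-in here.
public class UppercaseProcessor {
    abstract static class StreamProcessor {
        abstract byte[] process(byte[] chunk);
    }

    // Example processor: uppercases each chunk of the stream.
    static class Uppercase extends StreamProcessor {
        byte[] process(byte[] chunk) {
            return new String(chunk).toUpperCase().getBytes();
        }
    }

    public static void main(String[] args) {
        StreamProcessor p = new Uppercase(); // bound dynamically at run time
        System.out.println(new String(p.process("osmosis".getBytes()))); // OSMOSIS
    }
}
```

Because only the abstract type is referenced, a migrated processor can be rebound on the destination host the same way.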

Migration Integration

Kernel:

• Non-OSMOSIS-specific C/C++ API.

• Socket-based API.

Java:

• Java command line API.

• Provides abstract classes for processors.

• Socket-based API.

Migration Evaluation

Comparison with:

• Standardized methods for data pass through.

• Existing non-real-time streaming systems.

• Existing thread migration systems.

Comparison and integration between the Java and Kernel Loadable Module implementations.

Migration Future Work

Kernel:

• Implement zero-copy for the processing state.

• Heterogeneous thread migration.

Java:

• Increased performance.

Both:

• Support for alternate network protocols.

• Testing and evaluation.

Conclusions

The systems and algorithms developed are significant initial steps toward a final OSMOSIS system.

They have been designed to be modular and easy to integrate.

The research and mechanisms developed during this project are not bound to the OSMOSIS system.