World Wide Cache: Distributed Caching for the Distributed Enterprise

Transcript of World Wide Cache: Distributed Caching for the Distributed Enterprise.

  • Slide 1
  • World Wide Cache: Distributed Caching for the Distributed Enterprise
  • Slide 2
  • Agenda: Introduction to distributed caching; Scenarios for using caching; Caching for the virtual organization
  • Slide 3
  • The market need: "The major initiatives are building up our low-latency infrastructure, moving toward a service-oriented architecture (SOA) and leveraging grid computing." Sharon Reed (CTO for global markets trading technology, Merrill Lynch)
  • Slide 4
  • What is Distributed Caching? An in-memory data store that can be shared between distributed applications in a real-time fashion (a usage sketch follows below).
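
This definition maps naturally onto the standard JCache (JSR-107) API that the deck references later. The sketch below is a minimal, hedged illustration: the cache name "quotes", the symbol and the price are made up, and any JCache-compliant provider on the classpath would supply the actual distributed implementation.

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class QuoteCacheExample {
    public static void main(String[] args) {
        // Obtain whichever JCache provider is on the classpath
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        // Create (or look up) a shared in-memory cache of quotes keyed by symbol
        Cache<String, Double> quotes = manager.createCache(
                "quotes", new MutableConfiguration<String, Double>());

        // Any application sharing this cache sees the update in near real time
        quotes.put("MSFT", 27.35);
        System.out.println("MSFT = " + quotes.get("MSFT"));
    }
}
```
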
  • Slide 5
  • Background: Memory capacity has increased dramatically in recent years, and the net effect opens the opportunity to create a virtual memory grid. Many applications are looking for ways to utilize the available memory resources for a performance boost. The need: managing a memory resource in a reliable, transactional manner is extremely complex, so applications want a generic infrastructure for utilizing memory resources to reduce data-access overhead in a distributed environment.
  • Slide 6
  • Why use Distributed Caching? Scalability: reduce the centralized data bottleneck and enable scaling. Performance: reduce I/O overhead by bringing data closer to the application using it, providing in-memory speed. Reliability: use the cache as a reliable data store. Real-time content distribution: used for integration and synchronization purposes.
  • Slide 7
  • Before: reliability with a centralized DB. Diagram: user applications persist session info to an RDBMS (via JDBC/JDO). Limitations: performance and scalability.
  • Slide 8
  • After: reliability with distributed caching. Diagram: user applications keep session info in the distributed cache. Benefits: performance and scalability (see the session-store sketch below).
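
A minimal sketch of the "after" picture, assuming the caching product exposes (or can be wrapped in) a cluster-wide ConcurrentMap; the SessionStore class and the attribute names are illustrative, not any vendor's API. Session state goes to the shared in-memory map instead of the RDBMS, so any node can serve or recover it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SessionStore {
    // A ConcurrentHashMap stands in for whatever cluster-wide map the caching
    // product exposes; swapping in the real cache handle is the only change needed.
    private final ConcurrentMap<String, Map<String, Object>> sessions;

    public SessionStore(ConcurrentMap<String, Map<String, Object>> backingCache) {
        this.sessions = backingCache;
    }

    public void save(String sessionId, Map<String, Object> attributes) {
        sessions.put(sessionId, attributes);   // replicated/partitioned by the cache
    }

    public Map<String, Object> load(String sessionId) {
        return sessions.get(sessionId);        // served from memory on any node
    }

    public static void main(String[] args) {
        SessionStore store = new SessionStore(new ConcurrentHashMap<>());
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("user", "trader1");
        attrs.put("locale", "en_GB");
        store.save("abc123", attrs);
        System.out.println(store.load("abc123"));
    }
}
```
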
  • Slide 9
  • Before: scalability with a centralized DB. Diagram: telephone operating services behind a load balancer handle a peak load of users, and the shared RDBMS (JDBC, JDO) becomes the bottleneck.
  • Slide 10
  • After: scalability with a distributed cache. Diagram: the same telephone operating services behind the load balancer serve the peak load of users from the cache, without the centralized database bottleneck.
  • Slide 11
  • Distributed Caching Topologies: partitioned cache, replicated cache, and master/local cache (see the partition-routing sketch below).
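
As a rough illustration of the partitioned topology, the sketch below routes each key to an owning node by hashing. The node names and the simple modulo scheme are assumptions for illustration only, not any product's actual routing algorithm; replicated and master/local topologies differ in where copies live, not in this routing idea.

```java
public class PartitionRouter {
    private final String[] nodes;

    public PartitionRouter(String... nodes) {
        this.nodes = nodes;
    }

    // Map a key to its owning node: partition = hash(key) mod number-of-nodes
    public String ownerOf(Object key) {
        int partition = Math.floorMod(key.hashCode(), nodes.length); // avoid negative indexes
        return nodes[partition];
    }

    public static void main(String[] args) {
        PartitionRouter router = new PartitionRouter("NY1", "NY2", "London1", "London2");
        System.out.println("ORDER-42 is owned by " + router.ownerOf("ORDER-42"));
    }
}
```
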
  • Slide 12
  • Content distribution: content-based routing (no need for static queues), passing content and functionality, and dynamic orchestration (without changing the application). Diagram: a publisher feeds the distributed caching network, which routes by content to .NET, C++, Java fat-client and J2EE subscribers over RMI, JNI, JCA and MDB (see the routing sketch below).
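
Content-based routing of this kind can be illustrated with standard JMS message selectors: publishers tag messages with properties and subscribers declare the content they care about, so no static per-consumer queues are needed. The sketch assumes a JMS provider supplies the ConnectionFactory and Topic (for example via JNDI); the property name "region" and the quote payload are illustrative.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class ContentBasedRouting {

    // Publisher: the routing decision is carried by the message content itself
    static void publishQuote(ConnectionFactory factory, Topic quotes) throws Exception {
        Connection con = factory.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(quotes);
            TextMessage msg = session.createTextMessage("MSFT 27.35");
            msg.setStringProperty("region", "US");
            producer.send(msg);
        } finally {
            con.close();
        }
    }

    // Subscriber: only messages whose 'region' property matches the selector are delivered
    static void subscribeToRegion(ConnectionFactory factory, Topic quotes, String region) throws Exception {
        Connection con = factory.createConnection();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createConsumer(quotes, "region = '" + region + "'")
               .setMessageListener(m -> System.out.println(region + " subscriber got: " + m));
        con.start();
    }
}
```
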
  • Slide 13
  • SBA Virtual Middleware: the same data can be viewed through different interfaces (a JDBC virtual table, a JMS virtual topic/queue, or a JCache view of the clustered space). A single runtime maintains scalability and redundancy across all systems, reduces both maintenance overhead and development complexity, provides grid capabilities to EXISTING applications, and acts as a single virtualization technology for caching and messaging.
  • Slide 14
  • GigaSpaces EAG Caching Edition. Diagram: distributed shared memory (JavaSpaces) and a common clustering architecture underpin parallel processing, a messaging bus, distributed caching and middleware virtualization, with on-demand computing resources optimized for commodity server setups (service grid). Evolution: from JavaSpaces (2001-2003) to the Grid Application Server & Distributed Caching / Caching Edition (2005).
  • Slide 15
  • Case studies: 1. Distributed session sharing. 2. A geographically distributed trading application.
  • Slide 16
  • Simple example: session sharing between multiple mobile applications using distributed caching, covering session sharing, fail-over, replication and load balancing.
  • Slide 17
  • Background on Trading Applications: trading clients allow traders to monitor the market and submit trades. The read/write ratio is extremely high, and events have to be delivered as close to real time as possible. Traditional approaches used mostly messaging (IIOP, JMS, sockets) to implement such systems.
  • Slide 18
  • Caching for the Virtual Enterprise. Diagram: the order book application spans NY, London and Tokyo with a replicated cache with partitioned ownership; services include market view, quote management, hit manager, credit manager and session manager. Each site maintains a local cache of the market view, maintains session objects and profiles through leasing, and uses the master/worker pattern to execute logic on the server session (see the sketch below).
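
The master/worker pattern mentioned here can be sketched with any shared, blocking task structure: the master writes task entries and workers co-located with the data take and execute them, so logic moves to the data rather than the other way around. The BlockingQueue below stands in for the shared cache/space, and the task strings are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MasterWorkerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> tasks = new LinkedBlockingQueue<>();

        // Worker: runs next to the data it operates on
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String task = tasks.take();   // blocks until a task entry arrives
                    System.out.println("worker executed: " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        // Master: submits work instead of pulling data across the network
        tasks.put("price ORDER-42");
        tasks.put("price ORDER-43");
        Thread.sleep(200);                        // give the worker time to drain the queue
    }
}
```
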
  • Slide 19
  • Challenge: bandwidth over the 10 Mb/s NY-London replication link. Solution: batching, compression and async replication; data is kept local, and updates are applied locally based on ownership (see the batching sketch below).
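
A rough sketch of the bandwidth tactics listed above: accumulate local updates into a batch, gzip the serialized batch, and hand it to the asynchronous replication channel. The batch size of 100 and the placeholder shipAsync method are assumptions for illustration, not tuned or product-specific values.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPOutputStream;

public class WanReplicationBatcher {
    private final List<Serializable> pending = new ArrayList<>();

    public synchronized void queueUpdate(Serializable update) throws IOException {
        pending.add(update);
        if (pending.size() >= 100) {                     // batch threshold (illustrative)
            byte[] compressed = compress(new ArrayList<>(pending));
            pending.clear();
            shipAsync(compressed);
        }
    }

    // Serialize the batch and gzip it before it crosses the WAN
    private static byte[] compress(List<Serializable> batch) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(batch);
        }
        return bytes.toByteArray();
    }

    private static void shipAsync(byte[] payload) {
        // Placeholder: hand the compressed batch to the async replication channel
        System.out.println("shipping " + payload.length + " compressed bytes to the remote site");
    }
}
```
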
  • Slide 20
  • Challenge: reliability. Each site (NY, London) runs primary and backup instances with synchronous replication within the site and asynchronous replication between sites over the 10 Mb/s link.
  • Slide 21
  • Scaling through Partitioning: partition the data within each site (NY1/NY2, London1/London2) with load balancing, and replicate asynchronously between sites per partition over the WAN.
  • Slide 22
  • Challenge: synchronization with an external DB. Use the replication channel to perform reliable async replication to the external database (Sybase), and load data into the cache from the external data source when it is not already in the cache (see the read-through/write-behind sketch below).
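
The load/update flow can be sketched as a read-through cache with asynchronous write-behind: reads fall back to the external database only on a miss, and writes are pushed to the database off the caller's path. TradeDao below is a hypothetical interface standing in for the Sybase access code; it is not part of any product API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TradeCache {
    // Hypothetical data-access interface for the external database
    interface TradeDao {
        String loadTrade(String id);
        void storeTrade(String id, String trade);
    }

    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    private final ExecutorService writeBehind = Executors.newSingleThreadExecutor();
    private final TradeDao dao;

    public TradeCache(TradeDao dao) {
        this.dao = dao;
    }

    public String get(String id) {
        // Read-through: hit the external data source only when the cache misses
        return cache.computeIfAbsent(id, dao::loadTrade);
    }

    public void put(String id, String trade) {
        cache.put(id, trade);                                 // in-memory update is immediate
        writeBehind.submit(() -> dao.storeTrade(id, trade));  // database update happens asynchronously
    }
}
```
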
  • Slide 23
  • Challenge: data distribution between the primary/backup instances in NY and London over the 10 Mb/s link. Event-driven delivery on trade updates, aggregation of events from all sites, support for unicast/multicast, and server-side filtering (see the event-bus sketch below).
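
A small sketch of event-driven distribution with server-side filtering: each subscriber registers a predicate, and the node owning the data only pushes trade updates that match, so unwanted events never cross the WAN link. The Trade record and the subscribe/publish API are illustrative, not an actual product interface.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class TradeEventBus {
    record Trade(String symbol, String site, double price) {}

    private static class Subscription {
        final Predicate<Trade> filter;
        final Consumer<Trade> listener;
        Subscription(Predicate<Trade> filter, Consumer<Trade> listener) {
            this.filter = filter;
            this.listener = listener;
        }
    }

    private final List<Subscription> subscriptions = new CopyOnWriteArrayList<>();

    public void subscribe(Predicate<Trade> filter, Consumer<Trade> listener) {
        subscriptions.add(new Subscription(filter, listener));
    }

    // Called on the owning node whenever a trade entry is updated
    public void publish(Trade trade) {
        for (Subscription s : subscriptions) {
            if (s.filter.test(trade)) {     // filtering happens before the event leaves the server
                s.listener.accept(trade);
            }
        }
    }

    public static void main(String[] args) {
        TradeEventBus bus = new TradeEventBus();
        bus.subscribe(t -> t.site().equals("NY"), t -> System.out.println("NY desk sees " + t));
        bus.publish(new Trade("MSFT", "NY", 27.35));
        bus.publish(new Trade("VOD", "London", 1.21));   // filtered out, never delivered
    }
}
```
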
  • Slide 24
  • Challenge: distributed query. Provide SQL and ID-based queries (e.g. "Select xx from..") over the order book application's partitioned cache; partition data based on content and distribute each query based on ownership (see the query fan-out sketch below).
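
Distributing a query over a partitioned cache can be sketched as a fan-out and aggregate: the query (here just a predicate standing in for an SQL WHERE clause) is evaluated against each partition and the partial results are merged on the caller. The in-memory maps below are illustrative stand-ins for remote partitions, which would normally be queried in parallel.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class DistributedQuery {
    public static List<String> select(List<Map<String, String>> partitions,
                                      Predicate<String> where) {
        List<String> results = new ArrayList<>();
        for (Map<String, String> partition : partitions) {   // in practice each partition runs its part remotely
            for (String value : partition.values()) {
                if (where.test(value)) {
                    results.add(value);
                }
            }
        }
        return results;                                      // aggregated result set
    }

    public static void main(String[] args) {
        List<Map<String, String>> partitions = List.of(
                Map.of("ORDER-1", "MSFT buy 100"),
                Map.of("ORDER-2", "VOD sell 50"));
        System.out.println(select(partitions, v -> v.contains("buy")));
    }
}
```
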
  • Slide 25
  • Challenge: security. SSO (single sign-on) provides authentication and authorization; authorization can be based on content and operation. Replication filters enable filtering of data between sites based on content, and the design keeps the performance impact minimal for the order book application's partitioned cache (see the filter sketch below).
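
The replication-filter idea can be sketched as a content-based predicate applied before an update leaves its site: the filter decides whether the target site may see the entry. The ReplicationFilter interface, the OrderEntry record and the "restricted" flag below are hypothetical, used only to illustrate the concept.

```java
public class ContentReplicationFilter {
    // Hypothetical callback evaluated before an entry is replicated to another site
    interface ReplicationFilter<T> {
        boolean shouldReplicate(T entry, String targetSite);
    }

    record OrderEntry(String id, String owningSite, boolean restricted) {}

    // Restricted entries stay within their owning site; everything else replicates freely
    static ReplicationFilter<OrderEntry> restrictedToOwningSite() {
        return (entry, targetSite) ->
                !entry.restricted() || entry.owningSite().equals(targetSite);
    }

    public static void main(String[] args) {
        ReplicationFilter<OrderEntry> filter = restrictedToOwningSite();
        OrderEntry sensitive = new OrderEntry("ORDER-9", "NY", true);
        System.out.println("replicate to London? " + filter.shouldReplicate(sensitive, "London"));
        System.out.println("replicate to NY?     " + filter.shouldReplicate(sensitive, "NY"));
    }
}
```
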
  • Slide 26
  • Summary: distributed caching addresses the performance, scalability, and reliability needs of distributed applications, and it is a major piece in any grid deployment.