Ensuring Business Continuity with the IBM Fit For Purpose Methodology


  • © 2013 IBM Corporation

    Ensuring Business Continuity with the IBM Fit For Purpose Methodology Yann Kindelberger Senior Architect, Design Center IBM Client Center Montpellier yannkindelberger@fr.ibm.com

    GUIDE SHARE EUROPE


    GUIDE SHARE EUROPE - Region Belgium-Luxembourg - 32nd Regional Conference

    Agenda

    Workload and platform positioning

    The Key Fit for Purpose principles

    The Non Functional Requirements: Business Continuity

    Customer example

    IBM PSSC F4P & HACoC workshops offering


    The Fit for Purpose approach helps define the right platform to match the client's workload requirements

    • Fit for Purpose is a client-centric thought process that provides rational platform choices, in line with the client's workload requirements and local conditions.

    • Multiple areas to take into account:
      – Local factors
      – Non-functional requirements
      – Integration requirements
      – Workload characteristics
      – Overall costs

    [Diagram: the System z, System x and Power platforms weighed against throughput, HA/DR/continuous operations, security, data integration, application integration, I/O intensity, CPU-bound work, memory usage, TCO, skills, IT strategy/technology adoption, environmental constraints and scale]


    • Though 2-year cycles between technology nodes will continue beyond 22 nm, the historic 35%-per-node performance increase that came with each doubling of transistor density may drop to 15-20%. Power density is limiting the ability to fully translate transistor performance gains into system performance.

    Transistor performance scaling to continue, but at a slower rate

    Power is limiting practical performance

    Single thread performance is slowing dramatically

    [Chart: three relative trend curves, A, B and C]

    The dominant technology trends show that single-core operations are limited
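The slowdown described above compounds quickly across generations. A minimal sketch, using the per-node gains quoted on the slide; the five-node horizon is an arbitrary illustration:

```python
# Compound per-node performance gains across technology generations.
# Per-node percentages come from the slide; the five-node horizon is
# an arbitrary illustration.

def compound_gain(per_node_gain: float, nodes: int) -> float:
    """Total speedup after `nodes` technology generations."""
    return (1 + per_node_gain) ** nodes

print(f"5 nodes at 35%/node: {compound_gain(0.35, 5):.2f}x")  # historic rate
print(f"5 nodes at 15%/node: {compound_gain(0.15, 5):.2f}x")  # slowed rate
```

At the historic rate, five nodes compound to roughly 4.5x; at 15% per node the same five nodes yield only about 2x.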


    But there are also factors affecting workload scaling in multi-core operations

    Cache misses and I/O delays detract from throughput but do not reduce CPU busy time

    Queuing or contention for shared resources such as memory, network, I/O, buses, locks and software structures

    Cache-to-cache traffic caused by data sharing or data replication

    [Chart: throughput versus CPU utilization — the curve flattens toward saturation, with the gap to linear scaling attributed to queuing and contention, cache and buffer coherence, and CPU, I/O and memory bandwidth; the underlying determinants are CPU clock speed and code path length, plus memory, cache and I/O size, ports and speed. Accompanying diagram: multiple cores with private caches attached to shared memory and I/O]
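The deck gives no formula for this throughput curve, but Gunther's Universal Scalability Law models exactly the two effects named on the slide — contention (queuing for shared resources) and coherence delay (cache-to-cache traffic) — so it serves as a hedged illustration here; the sigma and kappa coefficients are invented:

```python
# Universal Scalability Law: linear speedup degraded by contention
# (sigma) and coherence delay (kappa). Coefficients are illustrative
# only, not measurements of any IBM system.

def usl_throughput(n: int, sigma: float, kappa: float) -> float:
    """Relative throughput of n parallel workers."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

for n in (1, 4, 16, 64):
    print(f"{n:3d} cores -> {usl_throughput(n, sigma=0.05, kappa=0.002):.2f}x")
```

With these coefficients throughput peaks and then declines: 64 cores deliver less relative throughput than 16, mirroring the saturation region of the chart.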


    Research showed that no server design can cope with all the limiting factors at once: trade-offs are mandatory in server design

    [Diagram: server design as a three-way trade-off among thread speed, thread count and effective cache per thread — pushing any one axis high forces the others lower]

    Fitness proxies:
    • Thread speed – serial fitness
    • Thread count – parallel fitness
    • Cache/thread – data fitness
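The three proxies can be read straight off a machine's spec sheet; a minimal sketch with invented figures (placeholders, not real server specs):

```python
# Fitness proxies per the slide: thread speed (serial fitness),
# thread count (parallel fitness), cache per thread (data fitness).
# All numbers below are invented placeholders, not real machine specs.

servers = {
    "big-smp-box": {"ghz": 5.2, "threads": 128, "cache_mb": 384},
    "blade-farm":  {"ghz": 2.4, "threads": 4096, "cache_mb": 1024},
}

for name, s in servers.items():
    print(f"{name}: serial={s['ghz']} GHz, "
          f"parallel={s['threads']} threads, "
          f"data={s['cache_mb'] / s['threads']:.2f} MB cache/thread")
```

The trade-off is visible immediately: the blade farm wins on parallel fitness but has far less cache per thread, i.e. lower data fitness.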


    Temple’s Assertion

    IBMer J. Temple asserts that IBM servers are optimized for different parts of Pfister's space and proposed this positioning:

    [Chart: servers plotted by fitness for serial loads versus fitness for data-centric loads — blades (x, POWER), iDataplex and BlueGene; highly scaled NUMA (Power 755, x3950); x3850 and midrange Power (p750, p770); closely coupled clusters; high-end Power (p780, p795); and mainframes — spanning the "Parallel Hell", "Parallel Purgatory" and "Parallel Nirvana" regions]


    These workload segments map to technical characteristics

    • Type 1 – Shared Data & Work Queues (Transaction Processing and Database)
      – Scales up
      – Updates to shared data and work queues
      – Complex virtualization
      – Business intelligence with heavy data sharing

    • Type 2 – Highly Threaded (Business Applications)
      – Scales well on large SMP
      – Web application servers
      – Single instance of an ERP system
      – Some partitioned databases

    • Type 3 – Parallel Data Structures (Analytics and High Performance)
      – Scales well on clusters
      – XML parsing
      – Business intelligence with little data sharing
      – HPC applications

    • Type 4 – Small Discrete (Web, Collaboration and Infrastructure)
      – Limited scaling needs
      – HTTP servers
      – File and print
      – FTP servers
      – Small end-user apps
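The quadrant logic behind the four types can be sketched as a tiny classifier over Pfister's two axes. The quadrant-to-type mapping below is my reading of the diagram on the next slide, and the function itself is purely illustrative:

```python
# Map Pfister's two axes (amount of sharing, amount of data) to the
# four workload types. Illustrative sketch; the quadrant-to-type
# mapping is an interpretation of the slide, not an IBM tool.

def workload_type(heavy_sharing: bool, bulk_data: bool) -> int:
    if heavy_sharing and bulk_data:
        return 1  # shared data and work queues, e.g. BI with heavy sharing
    if heavy_sharing:
        return 2  # highly threaded, e.g. web application servers
    if bulk_data:
        return 3  # parallel data structures, e.g. HPC, clustered BI
    return 4      # small discrete, e.g. HTTP, file/print, FTP

print(workload_type(True, True))    # type 1
print(workload_type(False, False))  # type 4
```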


    Workload positioning versus Pfister's Paradigm

    [Diagram: the four workload types on Pfister's axes — vertical: how much sharing is there? (synchronization traffic; contention and coherence delays); horizontal: how much data do we have to deal with? (bulk data traffic; saturation delay). Type 1: shared data and work queues; Type 2: highly threaded applications; Type 3: parallel data structures; Type 4: small discrete applications; regions labelled Parallel Nirvana, Parallel Purgatory and Parallel Hell]


    Combining workloads and servers in Pfister's diagrams allows workload types to be matched to machines

    One size does not fit all: Pfister's Paradigm and "Temple's Assertion"

    [Diagram: the workload-type quadrants from Pfister's Paradigm overlaid with the server positioning from Temple's Assertion, matching each workload type to the machines best suited to it]



    Industry-standard definitions for Business Continuity

    • IT Resiliency: continuous business operations
    • Continuous Availability: mask outages
      – HA (High Availability): mask unplanned outages
      – CO (Continuous Operations): mask planned outages
    • DR (Disaster Recovery)
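These tiers become concrete when translated into allowed outage time per year; a quick calculation (the availability figures are generic examples, not IBM commitments):

```python
# Convert an availability figure ("nines") into allowed downtime per
# year. Generic arithmetic; the sample figures are illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

for a in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{a * 100:.3f}% -> {downtime_minutes_per_year(a):8.1f} min/year")
```

Masking planned outages (CO) matters because scheduled maintenance alone can consume the whole budget of the lower tiers.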


    What is the difference between CA and DR?

    • Continuous Availability (CA):
      – When one component of the IT infrastructure (HW or SW) fails (unplanned) or stops (planned), the service to the users is not impacted, or is impacted only in a very limited scope: only the in-flight transactions, which will have to be rolled back.
        • Examples of CA features:
          – For systems: Parallel Sysplex, Partition Mobility, PowerHA
          – For data: HyperSwap, LVM mirroring, Metro Mirror (synchronous data replication), Oracle RAC
      – The worst case will lead to a restart (not a recovery), which can be done within minutes.
      – If there is an outage wider than IT components, this has nothing to do with CA...
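The CA behaviour described above — only in-flight transactions are lost, rolled back, and retried on a surviving component — can be sketched as a failover loop; `primary` and `standby` are stand-in functions, not any real IBM API:

```python
# Sketch of continuous-availability failover: a transaction that hits
# a failed component is rolled back (its partial work discarded) and
# retried on the next component. Illustrative only.

class ComponentDown(Exception):
    """Raised when a component suffers an unplanned outage."""

def with_failover(backends, txn):
    for backend in backends:
        try:
            return backend(txn)  # commit on a healthy component
        except ComponentDown:
            continue             # in-flight txn rolled back; retry elsewhere
    raise RuntimeError("no component available")

def primary(txn):
    raise ComponentDown          # simulate an unplanned outage

def standby(txn):
    return f"committed {txn}"

print(with_failover([primary, standby], "txn-42"))  # → committed txn-42
```

The user-visible effect matches the slide: the service survives, and only the transaction in flight at the moment of failure pays the (small) retry cost.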

    • Disaster Recovery (DR):
      – When there is a wide outage impacting several or all components of a location (possibly more than the IT infrastructure), it is called a disaster.