[En] Orange Open Grid presentation
Journey to the Open Grid
Soumik Sinharoy – Orange Silicon Valley. October 25, 2011, IBM Information on Demand, Las Vegas
World-leading communications brand owned by France Telecom
181,000 employees and €50.9 bn in revenue in 2010
Over 2,000 distribution outlets in Europe
A leading retailer of premium content & games, with ~20 million downloads per year in Europe
50 Orange applications, with ~9 million downloads from app stores
plus ...
Orange is ...
serving over 216 million customers in 32 countries across 5 continents, with an additional 100 million customers expected from Asia and Africa by 2015
[World map: Orange's footprint – France, Poland, Spain, United Kingdom, Belgium, Luxembourg, Switzerland, Austria, Slovakia, Romania, Moldova, Armenia, Jordan, Egypt, Tunisia, Morocco, Western Sahara, Mali, Niger, Senegal, Guinea, Ivory Coast, Cameroon, Central African Republic, Kenya, Uganda, Botswana, Madagascar, Mauritius, Reunion Island, Guyana, Martinique, Guadeloupe, Dominican Republic, Vanuatu; the legend distinguishes countries where we provide services for residential customers from those where we serve business customers]
216 million customers worldwide… our Group provides services for residential customers in 35 countries and for businesses in 220 countries and territories
3,500 employees working on innovation
[Map: Orange innovation centers – Warsaw (Poland), Beijing (China), Amman (Jordan), Cairo (Egypt), Madrid/Barcelona (Spain), 8 cities in France, San Francisco (USA), London (United Kingdom), Tokyo (Japan), Abidjan (Ivory Coast)]
Orange Silicon Valley: disruptive innovations for the future
Participate in disruptive innovations
Partnership and business development with companies, startups, and universities
Introduce our business leaders to the latest solutions from Silicon Valley
Co-development with the ecosystem; benchmarking and due diligence on new technologies to frame recommendations on technology and business strategy for the France Telecom Group
Executive briefings for global customer management teams and policy-makers about trends in the IT and communications business
The Journey
Open standards
Open architecture
Vendor neutral – level the playing field for competition
Carbon footprint – reduce power usage
Improve end-user experience
Compact infrastructure footprint
Dense computing – compute density, IO density, storage density
Infrastructure snapshot
More than 30,000 servers (EMEA)
More than 50% of deployments are 4-way to 64-way
Enterprise storage
– NAS (file sharing)
– FC SAN (for most purposes)
Challenges
– Business growth demands bigger platform sizing
– Scalability barrier
– Compute density and footprint
– Application performance limitations
– Architecture limitations
– Interconnect technology
SMP => Grid
Migration of large databases from SMP to grid (IBM System x blades)
90% reduction in server TCO
InfiniBand QDR for server interconnect
8G/4G FC SAN for Storage
[Diagram: app servers connected over InfiniBand to a database-as-a-service cluster]
2008 research study at IBM Montpellier: SystemX blades with Intel Xeon processors
[Diagram: Oracle RAC cluster and app servers on IBM HS21 XM blades, dual-connected (ib0/ib1) to two ISR9024M DDR switches with an inter-switch link; RAC interconnect and IO over SRP, SQL*Net over IPoIB; IB-attached IBM LSI DS5000 storage]
Nodes   Achieved TPM   Average (ms)   Max IOPS
4          763,038         44           27,880
6        1,162,022         22           36,870
8        1,665,183         33           46,067
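As a quick sanity check (my own arithmetic on the table above, not part of the original study), per-node throughput and scaling efficiency relative to the 4-node baseline can be computed in a few lines of Python:

# Illustrative check of the 2008 study's scaling numbers (values from the table above).
results = {4: 763_038, 6: 1_162_022, 8: 1_665_183}  # nodes -> achieved TPM

base_nodes, base_tpm = 4, results[4]
for nodes, tpm in sorted(results.items()):
    per_node = tpm / nodes
    # Efficiency: measured speedup vs. ideal linear speedup from the 4-node baseline
    efficiency = (tpm / base_tpm) / (nodes / base_nodes)
    print(f"{nodes} nodes: {per_node:,.0f} TPM/node, {efficiency:.0%} of linear scaling")

Per-node throughput holds steady or rises slightly with cluster size, i.e. the cluster scales at least linearly over the measured range.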
Open Grid
28 blades in IB QDR cluster
FCoIB -> FC : Low Latency bridging
FC SAN : capacity
[Diagram: application/database servers on InfiniBand → IP-to-Fibre-Channel gateway → Fibre Channel storage]
2010 study with IBM Silicon Valley Lab: integrate an FC SAN with an InfiniBand cluster using DB2 pureScale
2010 study: DB2 pureScale benchmark scaling results
[Chart: average TPS vs. number of members, 1–5 MBR, scale 0–60,000 TPS]
– ~10,000 TPS per member (>3x the 2008 study)
– 99% linear scalability
– Data dips reflect a transaction anomaly
[Chart: TPS over time (~500 seconds) for experiments with 1–5 members, scale 0–12,000 TPS]
– Standard table contained 2,500,000 rows
– Transaction mix: 80% read, 20% write
– 40+ connections to each member, with zero think time
– Read transaction consists of 10 random select statements
– Write transaction consists of 3 random update statements
(A minimal sketch of this workload in code follows below.)
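A minimal Python sketch of that workload shape, assuming a hypothetical accounts(id, balance) table and a generic DB-API connection; the slides do not give the actual schema or driver:

import random

READS_PER_TXN, WRITES_PER_TXN = 10, 3  # from the slide: 10 selects / 3 updates
READ_RATIO = 0.80                      # 80% read, 20% write mix
NUM_ROWS = 2_500_000                   # table size used in the study

def run_transaction(conn):
    """Issue one read or write transaction (schema is a placeholder)."""
    cur = conn.cursor()
    if random.random() < READ_RATIO:
        for _ in range(READS_PER_TXN):
            cur.execute("SELECT balance FROM accounts WHERE id = ?",
                        (random.randrange(NUM_ROWS),))
            cur.fetchone()
    else:
        for _ in range(WRITES_PER_TXN):
            cur.execute("UPDATE accounts SET balance = balance + 1 WHERE id = ?",
                        (random.randrange(NUM_ROWS),))
    conn.commit()

Each call issues one transaction and returns immediately, so driving it from 40+ concurrent connections per member reproduces the zero-think-time pattern.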
Storage server equation
HDD: too many spinning disks for a few servers!
Flash: too many servers needed for a single disk
The Inversion Phenomenon
SSD-based video storage at a 4 km distance
[Diagram: InfiniBand edge switch → InfiniBand core switch → InfiniBand edge switch, joined by QSFP transceivers over a 4 km optical link, with SSDs behind one edge]
16 diskless 8–12 core servers
4 km InfiniBand WAN
2,780 DVD-quality videos streamed
Only 13 Gbps of bandwidth used
This is only 60% of pipe capacity
Only 2 SSDs (ioDrive Duo)
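A quick back-of-the-envelope check of those figures (my own arithmetic, not from the slides):

# Back-of-the-envelope check of the 4 km streaming demo numbers.
streams = 2780            # DVD-quality videos streamed
used_gbps = 13.0          # bandwidth actually consumed
pipe_fraction = 0.60      # "only 60% of pipe capacity"

per_stream_mbps = used_gbps * 1000 / streams   # ~4.7 Mbps per stream
pipe_gbps = used_gbps / pipe_fraction          # ~21.7 Gbps of usable pipe

print(f"~{per_stream_mbps:.1f} Mbps per stream, ~{pipe_gbps:.1f} Gbps pipe")

~4.7 Mbps per stream is indeed a typical DVD bitrate, so the headline numbers are internally consistent.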
Demo
Flash memory adoption trends and forecasts
[Charts: worldwide enterprise SSD share by interface, 2010–2015 (IDC, 2011); note that PCIe 2.5" will vie for SATA and SAS share. Sources: IDC, Gartner, Flash Memory Summit '11]
OPEN GRID: scale out with flash
Server-side scale-out flash as Tier 0 (T0); low-cost capacity from the existing HDD SAN as Tier 1 (T1)
[Diagram: flash tier for performance, SAN tier for capacity. Source: Nutanix]
Scale on flash – unify the fabric
Scale out tier 0 with PCIe flash
ISV: manage the cache?
PCIe flash as a secondary buffer pool (a minimal sketch follows below)
Unified IO
– IO virtualization
– Reduced interconnect cost
– Transparent bridging
[Diagram: unified IO consolidating the many per-server HBAs]
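A minimal sketch of the "PCIe flash as a secondary buffer pool" idea, assuming a hypothetical backing store exposing read_block(lba); the slide names no specific product or API:

from collections import OrderedDict

class FlashBufferPool:
    """Tier-0 read cache: serve hot blocks from local PCIe flash,
    fall back to the FC SAN on misses (LRU eviction)."""

    def __init__(self, san, capacity_blocks):
        self.san = san                      # backing store with read_block(lba)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # lba -> block data, in LRU order

    def read_block(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)     # hit: refresh LRU position
            return self.cache[lba]
        data = self.san.read_block(lba)     # miss: fetch from SAN
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

Writes would additionally need an invalidation or write-through policy, which the slide leaves open.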
IBM SVL + OSV: experiment results
Stream computing
– Processing transactions "in motion"
– Fundamental validations: account, merchant, limit, rating (AMT)
– Able to handle "burst" traffic and cache database operations
OLTP cluster
– Rates measured using SAN and IBM High IOPS flash for logs
– DB2 pureScale average TPS with 2 data members and SAN: 47K TPS
– DB2 pureScale average TPS with 2 data members and flash: 84K TPS
– ~2x improvement using flash storage for database logs!
– Performance threshold not reached yet: need to add more stream servers
Scalability effort continues; results to be published
LD JCTD Operational Utility Assessment
Enhance situational awareness by enabling global data access
"While the first of about 10 files was still being transferred to the legacy workstation, Large Data had all ten files."
– LD JCTD IOUA Report, Nov 09, JHU/APL
Globally synchronized, shared data and high-resolution collaboration
[Diagram: Large Data (LD) nodes – create locally, share globally]
Operational Demonstration Results
Suitability
• Demonstrated TRL-7/8/9
• Cost effective, open source
• Commodity components
• Stability/availability on par with operational systems
"A quick overview of the system was all that was required for ease of use" – LD user
Operational impact
• GEOINT access & web services for warfighters
• UNCLAS US Gov't and NGO support
• Remote access to large, distributed ISR files
• Data virtualization & near real-time failover
"Simply put, the system NRL has in place for delivering large, AOI-detailed imagery is outstanding and truly a model for the DoD/IC." – Sean Wohltman, Google Inc.
Performance Results Summary as a Fraction of Theoretical Maximum Data Transfer Rate

NETWORK (bandwidth efficiency)            RDMA/IB                    TCP/IP/Ethernet
                                          Single      Multiple       Single      Multiple
                                          stream      streams        stream      streams
Transition metric (threshold/objective)   75% / 90%   80% / 90%      75% / 90%   80% / 90%
Wide Area Network (2,000 fiber miles)     94% ++      98% ++         86% +       17%
Long-haul WAN (13,000 fiber miles)        89% +       83% +          42%         22%

FILE SYSTEM (read/write)                  Single      Multiple       Single      Multiple
                                          file        files          file        files
Transition metric (threshold/objective)   60% / 80%   60% / 80%      60% / 80%   60% / 80%
Long-haul WAN (13,000 fiber miles)        72% ++      94% ++         34%         6%

+ meets threshold / ++ meets objective
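One reason single-stream TCP lags RDMA/IB so badly on the long-haul path is the bandwidth-delay product: at 13,000 fiber miles, a single stream must keep an enormous window in flight. A rough calculation (my own, assuming a 10 Gbps link rate and the usual ~2/3 c propagation speed in fiber):

# Rough bandwidth-delay product for the 13,000-fiber-mile path.
fiber_km = 13_000 * 1.609     # ~20,900 km of fiber
v_fiber_kms = 2.0e5           # km/s, roughly 2/3 the speed of light in glass
link_gbps = 10.0              # assumed link rate, for illustration

rtt_s = 2 * fiber_km / v_fiber_kms            # round trip: ~0.21 s
bdp_mb = link_gbps * 1e9 / 8 * rtt_s / 1e6    # window needed to fill the pipe

print(f"RTT ~= {rtt_s * 1000:.0f} ms, window needed ~= {bdp_mb:.0f} MB")

A single TCP stream rarely sustains a ~260 MB window, while RDMA transports keep the pipe full, which is consistent with the 42% vs. 89% single-stream results in the table.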
Rampant Lion on the web: http://mapserver.cmf.nrl.navy.mil/
Effectiveness: LD met or exceeded Transition Thresholds
RDMA transport : Global Range ?
RDMA transport : Long Haul ?
Developing the networks of the future
The challenges ahead:
– respond to the massive increase in traffic: by 2015, we expect the standard level of mobile data traffic to have risen by 26 times
– expand coverage and increase our speeds to keep pace with the digital revolution
– constantly improve the quality of our services
– renew our infrastructure while respecting the environment
40G InfiniBand over 370 km: ESnet ANI Testbed
Orange Silicon Valley qualifies the world's first long-haul 40G InfiniBand solution
Collaboration with the US Department of Energy
Field trial on a 370 km DoE circuit
Maximum bandwidth: 3.8 GB/s unidirectional, 7.6 GB/s bidirectional
90%+ transport efficiency under concurrent streaming (see the consistency check below)
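Those numbers line up with the link's theoretical ceiling: 4x QDR InfiniBand signals at 40 Gbps but, after 8b/10b line coding, carries 32 Gbps (4 GB/s) of payload. A quick check (my arithmetic, not from the slides):

# Consistency check: QDR 4x InfiniBand payload ceiling vs. measured throughput.
signal_gbps = 40.0
payload_gbps = signal_gbps * 8 / 10   # 8b/10b encoding -> 32 Gbps of data
ceiling_gbytes = payload_gbps / 8     # 4.0 GB/s theoretical unidirectional

measured_gbytes = 3.8                 # unidirectional GB/s from the field trial
print(f"efficiency = {measured_gbytes / ceiling_gbytes:.0%}")   # 95%

95% of the theoretical payload rate is consistent with the quoted 90%+ transport efficiency.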
Storage Network Backbone: the convergence phenomenon
Open Grid: crossing boundaries at global scale
– Dissemination of information throughout the enterprise
– Synchronize distributed enterprise data centers
– Ultra-low-latency fabric across the long haul
[Diagram: three data centers, each running a grid of servers and storage, interconnected over the WAN]
Seattle, Nov 2011
[Map: SSD-fed video streams across the continental US – SEA, SLC, CHI, NYC (MANLAN at AoA), BNL – over an OC-768 path (?)]
World's first 40 Gbps long-haul InfiniBand!
6,000 miles!
Thank You