Transcript of Fujitsu PRIMERGY Servers "Next Generation HPC and Cloud Architecture"
Copyright 2009 Fujitsu America, Inc.

Fujitsu PRIMERGY Servers
"Next Generation HPC and Cloud Architecture"
PRIMERGY CX1000
Tom Donnelly
April 2010
Factors driving IT growth and demand

The economic downturn accelerates demand for new technologies and concepts as IT budgets are cut:
- Datacenter Industrialization
- HR Strategy "War for Talent"
- Cloud Computing
- Virtualization
IT Moves Towards Shared Dynamic Infrastructures

Based on virtualization and automation technologies, customers move away from dedicated resources towards virtual or shared environments. Services and operations are increasingly important; integration and orchestration of resources are key prerequisites for Dynamic Infrastructures.
Dynamic Infrastructures

- Transform assets into dynamic resource pools
- Assign server, storage and network resources to business processes on demand
- Deliver complete, end-to-end integrated IT
- Provision alternative delivery models for IT to consistently address the unique needs of individual customers
- Base IT on maximum standardization to ensure end-to-end connectivity, reliability and scalability

The new delivery model of IT (the "Dynamic Cube"):
- Infrastructure as a Service
- Managed Infrastructure
- Infrastructure Solutions
- Infrastructure Products & Services
Extending the Server portfolio

PRIMERGY CX1000:
- Decouple compute power and storage
- Higher server density with centralized infrastructure
- Targeted at Internet scale-out datacenters and managed domains
- Versatile and scalable
CX1000 Target Customers

All companies facing power, cooling and datacenter density problems with current facilities that are looking for cost-efficient, massive scale-out hardware.

Companies in the context of cloud or IaaS:
- Web and managed hosters and IaaS providers offering public infrastructure services based on their own software and/or a virtualized middleware stack
- ERP, CRM or Web 2.0 outsourcers and application service providers offering dedicated datacenter Tier 1 or 2 outsourcing services
- System integrators offering turn-key cloud platforms to large enterprises

Large enterprise customers or in-house IT providers:
- Those who need to provide large scale-out web or Tier 2 application services
- In-house IT providers who need to offer IaaS services

Compute clusters or HPC:
- Companies running compute clusters for simulation, data mining, modeling, etc.
- Scientific research
Product Overview

PRIMERGY CX1000 – The Cool-Central™ Architecture

Standard switches (Brocade or others):
- Switch-agnostic network transparency
- Fujitsu integration of standard switches
- Shared or dedicated management LAN
- Pre-cabling at the factory

Cloud server nodes (CX120 S1):
- Simple, very cost-efficient rip-and-replace design
- Flexible and adaptable standard boards
- Fan-less and energy-efficient components
- 2-socket Intel Xeon 5600 series
- Customizable

Compute infrastructure (CX1000 S1):
- Simple, very energy-efficient rack infrastructure
- Innovative shared cooling solution, Cool-Central™
- Rack with 38 fan-less nodes, back-to-back setup
- I/O at front side, no doors
PRIMERGY CX1000 Summary

New datacenter economics with PRIMERGY CX1000 (Cool-Central™ versus traditional in-row cooling):
- Easy scale-out to hundreds or thousands of nodes
- More than 20% power saving versus rack servers
- Up to 40% datacenter space saving versus traditional rack servers
- Up to 20% CAPEX saving versus rack servers
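The space-saving claim above can be sanity-checked with a back-of-envelope calculation. A minimal sketch follows; only the 38-node rack count comes from the deck, while both per-rack floor footprints are assumed purely for illustration (they are not Fujitsu figures, and the deck does not state the baseline behind its "up to 40%" number):

```python
# Back-of-envelope check of the "space saving versus rack servers" claim.
# NODES_PER_RACK is from the slide; the footprints are illustrative assumptions.

NODES_PER_RACK = 38          # CX1000: 38 fan-less nodes per rack (from the deck)

# Assumed floor footprint per rack position in square metres, including the
# aisle space each layout needs (hypothetical values for illustration only).
TRADITIONAL_FOOTPRINT_M2 = 2.4  # rack plus front (cold) and rear (hot) aisle share
CX1000_FOOTPRINT_M2 = 1.7       # back-to-back racks, I/O at the front, no rear aisle

def space_per_node(footprint_m2: float, nodes: int) -> float:
    """Floor space consumed per server node, in square metres."""
    return footprint_m2 / nodes

# Assume the traditional rack also holds 38 1U-class nodes, to isolate
# the effect of the back-to-back layout from node density.
traditional = space_per_node(TRADITIONAL_FOOTPRINT_M2, NODES_PER_RACK)
cx1000 = space_per_node(CX1000_FOOTPRINT_M2, NODES_PER_RACK)

saving = 1 - cx1000 / traditional
print(f"space per node: {cx1000:.3f} m^2 vs {traditional:.3f} m^2")
print(f"space saving under these assumptions: {saving:.0%}")
```

Under these assumed footprints the layout change alone yields roughly a 29% saving; reaching the deck's "up to 40%" would additionally depend on node density and the baseline chosen for comparison.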
The Counter: "No InfiniBand Switch Support"

HP's statement: "If a HPC system requires over 4-8 nodes, InfiniBand matters."

HP's data is based on special cases. Fujitsu's analysis shows that only HPC systems needing more than 40-60 blades and using InfiniBand see system performance exceeding what Gigabit Ethernet delivers. Fujitsu's direction is: if customers want an HPC system using 60+ blades and request InfiniBand, DO NOT BID.

But also note:
- Gigabit Ethernet connections dominate, with a 60% share of top PC-cluster HPC systems; Gigabit Ethernet is the fastest-growing, largest segment.
- Low-end HPC systems are growing most rapidly; typical Windows Compute Cluster (WCC) blade systems use fewer than 10 blades.
- InfiniBand switches are very expensive.
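The argument above turns on when interconnect performance actually matters. A minimal sketch of the trade-off, using a naive latency-plus-serialization model; the bandwidth and latency figures are typical published values of the era used here as assumptions, not numbers from the deck:

```python
# Rough comparison of the two interconnects discussed above.
# Figures are era-typical assumptions: GbE ~1 Gb/s and tens of microseconds
# of latency; InfiniBand DDR 4x ~16 Gb/s data rate and low-microsecond latency.

LINKS = {
    "Gigabit Ethernet":  {"gbit_s": 1.0,  "latency_us": 50.0},
    "InfiniBand DDR 4x": {"gbit_s": 16.0, "latency_us": 2.0},
}

def transfer_time_us(msg_bytes: int, gbit_s: float, latency_us: float) -> float:
    """Naive one-way transfer time: latency plus serialization.

    gbit_s * 1e3 converts Gb/s to bits per microsecond.
    """
    return latency_us + (msg_bytes * 8) / (gbit_s * 1e3)

for size in (1_000, 1_000_000):  # a small (1 KB) and a large (1 MB) message
    for name, link in LINKS.items():
        t = transfer_time_us(size, **link)
        print(f"{name:18s} {size:>9,} B: {t:10.1f} us")
```

For small, latency-dominated messages InfiniBand is over an order of magnitude faster, which is where tightly coupled large clusters feel the difference; a loosely coupled cluster of fewer than 10 blades exchanging modest traffic can often live with Gigabit Ethernet, which is the deck's point.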