Posted on 24-Dec-2015
Appro Products and Solutions
Anthony Kenisky, Vice President of Sales
Appro, Premier Provider of Scalable Supercomputing Solutions: 4/16/09
APPRO CORPORATE PRESENTATION
Competitive Advantages :: Technology Differentiation
No need for LWK (Light-Weight-Kernel) which allows system to run a wider range of commercial applications
Diskless Operation with Std Linux OS support
ACE™ Cluster Management Software to simplify management of large, complex systems
Complete Cluster Management
System can be partitioned to support multiple operating environments to provide flexibility in 3rd party applications support
Dynamic Virtual Cluster Operation
Directed Airflow, an innovative air cooling system architecture which supports very dense systems and lowers facility cooling costs
Packaging
Scalable dual-rail, 3D Torus Topology, providing superior network communications performance
Network Communications Performance
GreenBlade™ System
HyperClusters™
Xtreme-X™ Supercomputer
:: Diverse Product Line
Appro Product Portfolio
Performance and availability for Large-Sized Deployments
Modular Solution for Small to Mid-Sized Deployments
Flexibility and Choice for Medium to Large-Sized Deployments
• Appro Xtreme-X™ Supercomputer
– Balanced Architecture (CPU/Memory/Network)
– Scalability (SW & Network)
– Reliability (Real RAS: Network, Node, SW)
– Facilities (30% less Space, 28% less Power & Cooling)
• Appro GreenBlade™ System
– Price/Performance/Value
– N+1 90%+ Efficient Power design
– Hot swap blades w/Local Storage option
– Density Advantage: 10 nodes/80 cores per system
• Appro HyperClusters™
– Cluster based on GreenBlade systems or rack-mount servers
– Choice of CPUs or GPUs
– Standard 19” rack support
– Price/Performance per watt, reliability and flexibility
:: Feature Summary
Product Portfolio
:: Performance and Availability
Xtreme-X™ Supercomputer
Management Network GbE Switch (Dual Rail)
36-port QDR Infiniband Switch
16 Compute Nodes per Subrack
• Power/Cooling efficient solution with lower TCO
• 512 cores per Rack
• Peak Performance/Rack: 6.1TF
• Memory Capacity/Rack: 1.5TB
• Dual Rail Management Network
• Efficient Cooling Architecture: 28.8kW/Rack
• Redundant PSU 90% Efficiency, 5+1, PFC
• Redundant Fan: Up to 460W power/Node
• Appro Cluster Engine™ Management Software
• Complete package with improved RAS
• Ideal for large deployments, > 1000 nodes
• Scales up to 1000TF of computing power
• Improved cooling efficiency increases computing capacity by up to 28% with the same amount of cooling
• 100% of AC airflow is forced through the entire system, increasing system cooling efficiency
• Reduced total cost of ownership and increased computing capacity
:: Directed Airflow Cooling Methodology
Top View – Datacenter Floor Space
Xtreme-X™ Supercomputer
Renault F1 Racing Team
Delivered a 38TF Xtreme-X Supercomputer for the Virtual Wind Tunnel project.
Customer Quote: “Appro not only offered us a cost effective solution but they also improved our required technical specification through better reliability, greater fault tolerance and redundancy as well as more flexibility with regards to system scalability.”
Bob Bell, Technical Director, ING Renault F1 Team
:: Recent Design Deployment
Xtreme-X™ Supercomputer
:: HPC Building Block Solution
GreenBlade™ System
• Green Architecture
– Power-optimized design reduces power consumption of the platform cooling subsystem
• Increased Density
– Double the density of standard 1U servers
• Improved R/A/S
– Shared N+1 power design
– Shared cooling (3+3) design
– Hot-swappable blades
– Hot-swappable power supplies
– Hot-swappable cooling fans
Front View
Rear View
• Three-Year Savings in Power Cost per Server vs. GreenBlade™
– To adequately cool a compact 1U server, a typical 1U platform's cooling system is designed as follows:
• A standard 1U server uses up to six compact fans to cool one system
• A popular Twin 1U server uses six compact fans to cool two systems
– GreenBlade™, on the other hand, uses six larger (120mm), more reliable fans to cool up to 10 systems
– The example below translates to lower TCO for a GreenBlade™ solution over the competition
| | Power Draw @ Peak (W) | Power Draw (kWh) – 3 yrs | Power Draw & Cooling (kWh)** | 3-yr Cost of Power & Cooling in US ($)* | Additional Cost over GreenBlade™ | Total Additional Cost for 1,000 Servers |
| GreenBlade™ | 297.2 | 7,810 | 12,496 | $1,624 | - | - |
| 1U Twin Server | 330.2 | 8,677 | 13,883 | $1,804 | $180 | $180,000 |
| 1U Standard Server | 366.5 | 9,631 | 17,335 | $2,253 | $629 | $629,000 |

* Power cost calculated at $0.13/kWh per the Dept. of Energy commercial power cost national estimate report (2007)
** Cooling power requirement estimated at 60% of total power supplied
:: Benefits vs. standard 1U servers
GreenBlade™ System
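The cost comparison above can be reproduced from the footnotes' stated assumptions: a $0.13/kWh rate and cooling overhead at 60% of supplied power, with intermediate values truncated to whole kWh. The sketch below is an illustration of that arithmetic, not an official Appro calculator; the truncation convention is an assumption inferred from the table's rounding.

```python
# Sketch of the 3-year power-and-cooling cost estimate behind the table.
# Assumptions (from the footnotes): $0.13/kWh, cooling power at 60% of
# total power supplied, and truncation to whole units at each step
# (inferred from the table's rounding).

HOURS_3YR = 3 * 365 * 24     # 26,280 hours of continuous operation
RATE_USD_PER_KWH = 0.13      # 2007 DOE commercial power cost estimate
COOLING_FACTOR = 1.6         # supplied power plus 60% cooling overhead

def three_year_cost(peak_watts):
    """Return (kWh over 3 yrs, kWh incl. cooling, cost in USD)."""
    kwh = int(peak_watts * HOURS_3YR / 1000)
    kwh_cooled = int(kwh * COOLING_FACTOR)
    cost = int(kwh_cooled * RATE_USD_PER_KWH)
    return kwh, kwh_cooled, cost

greenblade = three_year_cost(297.2)   # (7810, 12496, 1624)
twin_1u = three_year_cost(330.2)      # (8677, 13883, 1804)
```

The GreenBlade™ and 1U Twin rows match the table exactly under these assumptions; the 1U Standard row's cooling figure (17,335 kWh) implies a higher cooling overhead than the 60% stated in the footnote.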
:: Latest Processor and GPU Technologies
New - Appro HyperClusters™
Appro HyperGreen™ Cluster – Power-Efficient, based on the GreenBlade System
Appro HyperPower™ Cluster – GPU-based, Performance-Optimized Clusters
Coming May/2009
• Based on an open architecture designed for medium to large deployments
• Delivers performance, scalability and greater flexibility in a dense cluster architecture
• Offers the latest processor and GPU computing technologies
• Open cluster management options in an economical and power efficient compute platform
• Choice of server, networking and software with a variety of configuration options
• Tested and pre-integrated solution deployed as a complete package
• HPC professional services and support available
Specifications
:: Maximum Rack Configuration
HyperGreen™ Cluster
• Cluster Solution based on the Appro GreenBlade System
• Up to 80 DP GreenBlades™/640 cores per 42U Rack
• Up to 5.12TB of system memory
• Up to 80TB of local storage
• Fully populated rack weighs ~1,749 lbs
• Rack-level power consumption is ~16kW to 32kW
• 20% power reduction per node compared to 1U servers
• Supports multi-configuration and interconnect options
• Standard IPMI or Appro BladeDome remote server mgmt
• Choice of open software solutions using Rocks+ and MOAB
• Ideal for mid to large-sized HPC and high-density computing
• Specifications
– Supports up to 8x Sub-Racks
– Supports up to 80x DP Compute Blades
• Use one SXB100 or 200 as the Master node
• Some blades can be configured as network storage nodes for the cluster
– Supports up to 7.6TF per rack
– Supports up to 5.12TB of system memory
– Supports up to 160x internal 2.5” HDDs, equal to 80TB of local storage
– 2RU of rack space left for switches
– IB switches should be installed in a separate infrastructure rack
– Fully populated rack weighs ~1,749 lbs
– Depending on system configuration, rack-level power consumption is 16kW to 32kW
:: One Scalable Unit – Cluster Recipe
HyperGreen™ Cluster
7U 144p IB SW
1U LCD KVM
2U Mgmnt Server
:: 136-Node Sample Cluster Configuration
• Specifications – 136-Node Cluster
– 14x Sub-Racks
– 140x DP Compute Blades
– 4x spare Compute Blades
– 13TF Cluster (using 3GHz CPU)
– 6.5TB of system memory
– 2U Management Node
– 1U Rackmount LCD/KVM
– 7U 144p DDR IB Switch
HyperGreen™ Cluster
14U 288p IB SW
2U Mgmnt Server
:: 288-Node Sample Cluster Configuration
• Specifications – 288-Node Cluster
– 29x Sub-Racks
– 290x DP Compute Blades
– 2x spare Compute Blades
– 27.6TF Cluster (using 3GHz CPU)
– 13.8TB of system memory
– 2U Management Node
– 14U 288p DDR IB Switch
HyperGreen™ Cluster
30U 864p IB Switch
2U Mgmnt Server
:: 864-Node Sample Cluster Configuration
• Specifications – 864-Node Cluster
– 88x Sub-Racks
– 880x DP Compute Blades
– 16x spare Compute Blades
– 82.9TF Cluster (using 3GHz CPU)
– 41.4TB of system memory
– 2U Management Node
– 30U 864p DDR IB Switch
HyperGreen™ Cluster
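The peak-performance figures across the sample configurations all follow the same arithmetic: nodes × cores per node × clock × floating-point operations per cycle. A minimal sketch, assuming 8 cores per DP blade (consistent with the "10 nodes/80 cores per system" GreenBlade figure) and 4 flops per core per clock, typical of quad-core x86 processors of the era:

```python
# Sketch of the peak-performance sizing behind the sample clusters.
# Assumptions (not stated explicitly in the deck): 8 cores per DP
# blade and 4 flops/cycle/core for the 3GHz CPUs cited.

CORES_PER_NODE = 8       # dual-processor quad-core blade (assumption)
FLOPS_PER_CYCLE = 4      # per core, typical quad-core x86 of the era

def peak_teraflops(nodes, ghz=3.0):
    """Peak TF = nodes * cores/node * clock (GHz) * flops/cycle / 1000."""
    return nodes * CORES_PER_NODE * ghz * FLOPS_PER_CYCLE / 1000.0

# 136 nodes -> ~13.1TF (deck rounds to 13TF)
# 288 nodes -> 27.6TF; 864 nodes -> 82.9TF, matching the deck
```

The same formula reproduces the rack-level figures: 80 blades at 640 cores gives ~7.7TF (the deck truncates to 7.6TF), and the memory totals correspond to 48GB per node in these sample configurations.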