Technology for better business outcomes
Enhancing CAE With HP and Platform Computing
Dennis Ang
High Performance Computing
Top Trends in HPC
• HPC hit an all-time high of $11.5 billion in 2007
− 19% CAGR in server factory revenue over the last 4 years
• x86 and Linux still dominate – no surprise
− Clusters continue to gain share across all segments; the mid- to low-end of the market continues to fuel the growth
• Blades are making inroads into all segments
− Oil/Gas, CAE, DCC, and EDA are high-growth areas
• New challenges for datacenters:
− Power, cooling, real estate, system management, system consolidation
• Storage and data management continue to grow in importance
IDC data: the market remains hot and is growing in strategic importance.
HP leading the HPC market
HP leads:
• The HPC market overall, 4 years running
• The cluster space, which is driving the HPC market
• Blades, the optimal cluster solution

CY07 HPC by vendor: HP 37% | IBM 30% | Dell 14% | Other 19%
CY07 HPC clusters by vendor: HP 36% | IBM 24% | Dell 24% | Other 16%
Source: IDC Q2CY08 QVIEW (IDC SEP 2008)
HP CAE Reference Architecture
[Architecture diagram: client workstations and RGS remote workstations connect over the LAN to a job scheduler, which dispatches work to compute clusters and compute SMPs; a pre/post-processing SMP and a visualization cluster handle meshing and visualization; storage is DAS (or SFS) plus HP Scalable File Share; compute nodes communicate over an InfiniBand switched fabric interconnect.]
CAE workflow: Modeling (CAS, Clay, CAD) → Pre-processing (meshing) → Simulation → Post-processing (visualization) → Result
Morphing (surface or volumetric): shape modification of CAS/CAD data
CAE Solution Characteristics
(* = emerging capability; DP = double-precision math)

CAE domain:              Pre/Post         Structures        Impact               Fluids
Parallelized             Serial (SMP*)    SMP (MPI*)        HP-MPI               HP-MPI
Typical job scalability  1 (4*)           1–4 (8*)          2–64                 4–256
  (# cores per job)
Multi-core WS solutions  2–4 cores        4–8 cores         8 cores              8 cores
Typical WS memory        2–8 GB per core  4–8 GB per core   4–8 GB per core      4–8 GB per core
Disk recommendations     SAS              SAS               SAS                  SAS
Server solutions         SMP server       Integrity         x64 cluster          x64 cluster
                                                            (if DP, Integrity)   (if DP, Integrity)
Typical memory (server)  32–128 GB/node   16–64 GB/node     2 GB/core            1–2 GB/core
                                                            (if DP, 4 GB)
Typical disk (server)    NAS / SAN        Striped array or  1×146 GB/node        1×146 GB/node
                                          SFS for scratch   (mirror option)      (mirror option)
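In the Impact and Fluids columns, "HP-MPI" means the solver is decomposed across cluster nodes with message passing, which is what lets one job scale to 64–256 cores. Below is a minimal sketch of that pattern, a toy 1-D halo exchange using only standard MPI calls, not any ISV solver's actual code; LOCAL_CELLS, STEPS, and the update rule are illustrative placeholders:

```c
/* Toy domain decomposition in the style of an MPI-parallel CAE solver:
 * each rank owns a slab of cells and swaps one-cell halos with its
 * neighbors every step. Illustrative only. Build: mpicc solver.c     */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define LOCAL_CELLS 1000   /* cells per rank (placeholder) */
#define STEPS       100    /* iterations (placeholder)     */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* local field with one halo cell at each end (calloc zero-fills) */
    double *u = calloc(LOCAL_CELLS + 2, sizeof *u);
    double *v = calloc(LOCAL_CELLS + 2, sizeof *v);
    u[LOCAL_CELLS / 2] = 1.0;  /* arbitrary initial disturbance */

    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < STEPS; step++) {
        /* halo exchange: ship edge cells to neighbors, receive theirs */
        MPI_Sendrecv(&u[1],               1, MPI_DOUBLE, left,  0,
                     &u[LOCAL_CELLS + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_CELLS],     1, MPI_DOUBLE, right, 1,
                     &u[0],               1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* explicit diffusion-style update on owned cells */
        for (int i = 1; i <= LOCAL_CELLS; i++)
            v[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        double *tmp = u; u = v; v = tmp;
    }

    /* global diagnostic via a reduction across all ranks */
    double local = 0.0, total = 0.0;
    for (int i = 1; i <= LOCAL_CELLS; i++) local += u[i];
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("field sum across %d ranks: %f\n", nprocs, total);

    free(u); free(v);
    MPI_Finalize();
    return 0;
}
```

Scaling such a job to the 4–256 cores in the Fluids column is then a launcher-level choice (e.g. mpirun -np 256), with per-rank memory sized per the table above.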
HPC Server Platforms
• HP ProLiant servers: the world's best-selling rack-mountable servers, meeting a broad range of HPC application requirements
• HP BladeSystem c-Class servers: the ideal platform for HPC clusters, delivering performance, density, efficiency, and manageability
• HP Integrity servers: highly scalable servers for large-memory, SMP, and database workloads in mission-critical HPC environments
Unified Cluster Portfolio
• HPC cluster services
• HPC application, development, and grid software portfolio
• Scalable visualization
• Scalable data management
• HP cluster platforms: HP ProLiant and Integrity servers, HP BladeSystem, multiple interconnects
• Operating environment and OS extensions: HP-UX, Linux, Windows
• Cluster management layer: ClusterPack, XC, CMU, and Insight Control; Microsoft Windows CCS; partner software
BladeSystem c-Class has been embraced in the marketplace
• Midmarket Summit: 3rd “Best in Show” in a row, as voted by attending CIOs of midmarket companies
• Tech Innovator of the Year award, Server Hardware category, for the second year in a row
• Seven quarters of clear leadership since launch
• HP's Virtual Connect architecture won the Product of the Year award
[Chart: x86 blade server market share by vendor (HP, IBM, Dell, Sun), Q3 2006 through Q2 2008, with HP leading throughout. Data: Gartner Server Tracker, Q1 2008.]
BladeSystem c3000: winner in the Best Blade System category, 2008
176 of the top 500 supercomputers in the world run on c-Class
(TOP500 June 08 list)
HP ProLiant BL2x220c G5
Features per node (2 nodes per blade):
• Processor: up to two Dual-Core or Quad-Core Intel® Xeon® processors
• Memory: Registered DDR2 (533/667 MHz), 4 DIMM sockets, 16 GB max memory with 4 GB DIMMs
• Internal storage: 1 non-hot-plug SFF SATA HDD
• Networking: 2 integrated Gigabit NICs
• Mezzanine slots: 1 PCIe mezzanine expansion slot (x8, Type I)
• Management: Integrated Lights-Out 2 Standard Blade Edition
• Density: 16 server blades in a c7000 (10U enclosure); 8 server blades in a c3000 (6U enclosure); see the arithmetic below
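With two nodes per blade, the density figures above translate into nodes per rack unit as follows (straight arithmetic from the stated numbers):

\[
\text{c7000: } \frac{16 \times 2}{10\ \text{U}} = 3.2\ \text{nodes/U}, \qquad
\text{c3000: } \frac{8 \times 2}{6\ \text{U}} \approx 2.7\ \text{nodes/U}
\]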
A historical perspective
Eight years ago: ASCI White
• #1 on the TOP500 in June 2001
• Peak performance: 12,288 Gflop/s
• Weight: 106 tons (with 160 TB of storage)
• Power: 3 MW
• Cost: $110 million
Today: one rack full of BL2x220c
• Peak performance: 12,288 Gflop/s
• Weight: more than 100x lighter
• Power: more than 100x less power
• Cost: more than 100x lower cost
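One plausible way to arrive at the one-rack figure: assume four c7000 enclosures in a 42U rack, fully populated with BL2x220c blades, each node carrying two quad-core Xeons issuing 4 flops per cycle at 3.0 GHz (the enclosure count and clock rate are our assumptions; the slide states only the totals):

\[
4 \times 16 \times 2 = 128\ \text{nodes}, \qquad
128 \times 2 \times 4 = 1024\ \text{cores}, \qquad
1024 \times 4\ \tfrac{\text{flops}}{\text{cycle}} \times 3.0\ \text{GHz} = 12{,}288\ \text{Gflop/s}
\]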
From rack-mount to blade
Example configuration: 256-node cluster with InfiniBand
BladeSystem advantage:
• Power: up to 30% savings
• Floor space: from 8 racks down to 5 racks
• Network cables: up to 78% fewer
• And excellent manageability!
Cluster Platform Workgroup System (CPWS)
• A winner for the mid-market and volume cluster space
− 1.5 Tflop/s peak performance in 2 sq ft of floor space
• 3.0 GHz, 4 flops/cycle, 128 cores
− 3 × 4 × 128 = 1536 Gflop/s = 1.5 Tflop/s (see the worked calculation below)
− No special power & cooling required*
− GigE and IB – no network cables
* Site planning may be needed to confirm the circuit can supply sufficient power
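Where the 128 cores come from, assuming a fully populated c3000 (eight BL2x220c blades, two nodes per blade, two quad-core sockets per node; the slide gives only the totals, so the blade configuration is our assumption):

\[
8 \times 2 \times (2 \times 4) = 128\ \text{cores}, \qquad
3.0\ \text{GHz} \times 4\ \tfrac{\text{flops}}{\text{cycle}} \times 128\ \text{cores} = 1536\ \text{Gflop/s} \approx 1.5\ \text{Tflop/s}
\]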
HP Cluster Platform Workgroup System
• Up to 8 server blades in an HP BladeSystem c3000: a simple, compact design with customer needs in mind
− 809 Gflops in a 2 sq ft footprint
− Plugs into 110V or 220V wall power
− No special cooling required
• Gigabit Ethernet and InfiniBand interconnect options
• Choice of Linux cluster systems or Microsoft Windows CCS
• Ideal for compute-intensive workloads in CAE, …
Up to 64 cores of processing power to tackle serious workloads
[Diagram: c3000 enclosure with BL46Xc blades serving as control/compute and compute nodes, an SB40c storage blade, and expansion room for more compute nodes; runs Windows CCS or Linux.]
Compact and hassle-free packages
• c3000 – mount into standard racks
• Standalone option
Why HP-MPI for cluster computing?
Feature        ISV & end-user benefits                              Results
Portability    Application independence from switch, OS,            Fewer qual tests; more productive engineers
               CPU type, development tools, ...
Robustness     Bulletproof run-time execution; new versions         Increased system utilization factors
               backward compatible
Performance    Parity or performance gain over alternative MPIs     HP-MPI is a high-performance MPI
Support by HP  Superior support compared to public-domain or        Fast, dependable response to issues
               other commercial MPI libraries
Applications   Broad ISV adoption ensures application               User applications easily installed
               availability on the widest choice of platforms
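Because HP-MPI implements the standard MPI interface, the same source should compile and run unchanged against HP-MPI or any other conforming library; that is the portability claim in practice. A minimal check (compiler-wrapper and launcher names vary by installation):

```c
/* Minimal MPI program using only standard MPI calls, so it runs
 * unchanged under HP-MPI or any other conforming implementation.
 * Typical build/run (names vary by site):
 *   mpicc hello.c -o hello && mpirun -np 4 ./hello            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID      */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total process count    */
    MPI_Get_processor_name(host, &len);     /* node this rank runs on */

    printf("rank %d of %d on %s\n", rank, nprocs, host);

    MPI_Finalize();
    return 0;
}
```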
ISVs are standardizing on HP-MPI
[Logo slide: ISV applications standardized on HP-MPI, including AMLS (Powerful Solver Technology) and Molpro (University of Cardiff), among others.]
Why HP & ANSYS?
• Broadest Choice of Cluster Solutions
− HP is the only hardware vendor to offer cluster solutions on all three industry-standard architectures that ANSYS has chosen to support: Opteron, Xeon, and IA-64
• Strong Technical and Customer-Focused Partnership
− ANSYS distributes HP-MPI for both HP and non-HP Linux clusters
− Expert HP solution engineers for ANSYS Mechanical, FLUENT, and ANSYS CFX have assisted with porting, optimization, and customer support for over a decade
− CAE leadership from both companies ensures strong joint global sales and support for our customers
• Performance Leadership
− Reliable history of consistent leadership and gains in raw performance and price/performance
Advancing innovation through high-productivity computing
• Leading innovation
− Performance
− Efficiency
• Standards-based economies
− Simplicity
− Affordability
• Time-proven confidence
− Expertise
− Reliability