Compute Blades


description

This presentation details the primary compute nodes used in our Apex Clusters.

Transcript of Compute Blades

Page 1: Compute Blades

Pinnacle Compute Blades: Intel Xeon and AMD Opteron 2-in-1U compute blades

advanced clustering technologies • www.advancedclustering.com • 866.802.8222

Page 2: Compute Blades

what is a compute blade?

• All the features and benefits of a modular “blade” system, but designed specifically for the needs of HPC cluster environments

• Consists of two pieces:

• Blade Housing - an enclosure that holds the compute modules and mounts into the rack cabinet

• Compute Blade Module - a complete, independent high-end server that slides into the Blade Housing


Page 3: Compute Blades

blade housing

A 1U enclosure that holds 2x compute blades; the standard 19” design fits in any cabinet.


Page 4: Compute Blades

blade module

A complete, independent compute node - contains CPUs, RAM, disk drives, and a power supply


Page 5: Compute Blades

compute blade key points

2x Computing power in the same space

80%+ efficient power supplies, low-power CPU and drive options

Each compute node is modular, removable and tool-less

Each node is independent - no impact on other nodes

Nodes equipped with management engine: IPMI and iKVM (see the sketch after this list)

Mix and match architecture in same blade housing
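Remote management in practice: below is a minimal sketch of driving a blade's IPMI interface from a head node using the standard ipmitool utility, wrapped in Python. The BMC address and credentials are placeholders for whatever is configured on the blade's dedicated management LAN.

```python
import subprocess

# Hypothetical management-LAN address and credentials; substitute the
# values configured on your blade's dedicated IPMI port.
BMC_HOST = "10.0.0.101"
BMC_USER = "admin"
BMC_PASS = "secret"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the blade's BMC over the LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    print(ipmi("sensor", "list"))              # temperatures, fans, voltages
```

Because each node carries its own management controller, a failed blade can be power-cycled or inspected over iKVM without touching its neighbor in the same housing.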


Page 6: Compute Blades

compute blade vs 1U twin

Twin system:

• Nodes fixed into enclosure

• Must take both nodes down even if servicing only 1 node

• Single shared power supply

Our Compute Blade:

• Individual removable nodes

• Nodes can run outside of housing for testing and serviceability

• Dedicated 80%+ efficient power supply

• Mix and match CPU architectures


Page 7: Compute Blades

blade product highlights

• High density without sacrificing performance

• High reliability - independent power supply and removable blade modules

• Easy serviceability - each module is removable and usable without the blade housing

• Tool-less design for easy replacement of failed components

• Multiple system architectures - available with both AMD Opteron and Intel Xeon


Page 8: Compute Blades

compute blade - front

1 Power LED

2 Power switch

3 HDD LED

4 Slide-out ID label area

5 Quick-release handles


Page 9: Compute Blades

compute blade - inside

1 Power supply

2 Drive bay (1x 3.5” or 2x 2.5”)

3 Cooling fans

4 System memory

5 Processors

6 Low-profile expansion card

Blade modules independently slide out of the housing


Page 10: Compute Blades

compute blade - features

• Easy swap tool-less fan housing

• Fans and hard drives shock-mounted to prevent vibration and failure

• 80%+ efficient power supply per blade

• Thumbscrew installation for easy replacement


Page 11: Compute Blades

compute blade - density

Standard 1U, dual-CPU servers: max 42 servers per rack (336 cores)

Compute blade servers: max 84 servers per rack (672 cores)
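The core counts follow directly from the dual quad-core configuration (8 cores per server); a quick check, assuming a standard 42U rack:

```python
# Rack-density figures from the slide, assuming dual quad-core nodes
# (2 sockets x 4 cores = 8 cores per server) in a standard 42U rack.
CORES_PER_SERVER = 2 * 4

servers_1u    = 42   # one 1U server per rack unit
servers_blade = 84   # two blade modules per 1U housing

print(servers_1u * CORES_PER_SERVER)     # 336 cores
print(servers_blade * CORES_PER_SERVER)  # 672 cores
```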

Page 12: Compute Blades

compute blade models

Model: 1BX5501 | 1BA2301

Processor: Dual quad-core Intel Xeon 5500 series | Dual AMD Opteron 2300 (4-core) or 2400 (6-core) series

Chipset: Intel 5500 chipset with QPI interconnect | NVIDIA MCP55 Pro with HyperTransport

System memory: Maximum of 12 DDR3 DIMMs or 96GB | Maximum of 8 DDR2 DIMMs or 64GB

Expansion slot: 1x PCI-e Gen 2.0 x16 | 1x PCI-e x16

LAN: 2x 1Gbps RJ-45 Ethernet ports | 2x 1Gbps RJ-45 Ethernet ports

InfiniBand: Optional onboard ConnectX DDR | Optional onboard ConnectX DDR

Manageability: Dedicated LAN for IPMI 2.0 and iKVM | Dedicated LAN for IPMI 2.0

Power supply: 80%+ efficient power supply | 80%+ efficient power supply


Page 13: Compute Blades

compute blade - 1BX5501

• Processor (per blade)

• Two Intel Xeon 5500 Series processors

• Next generation "Nehalem" microarchitecture

• Integrated memory controller and 2x QPI chipset interconnects per processor

• 45nm process technology

• Chipset (per blade)

• Intel 5500 I/O controller hub

• Memory (per blade)

• 800MHz, 1066MHz, or 1333MHz DDR3 memory

• Twelve DIMM sockets supporting up to 144GB of memory

• Storage (per blade)

• One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays

• Supports RAID levels 0 and 1 with Linux software RAID (with 2.5" drives) - see the sketch after this list

• Drives shock-mounted into the enclosure to prevent vibration-related failures

• Support for high-performance solid state drives

• Management (per blade)

• Integrated IPMI 2.0 module

• Integrated management controller providing iKVM and remote disk emulation.

• Dedicated RJ45 LAN for management network

• I/O connections (per blade)

• One open PCI-Express 2.0 expansion slot running at x16

• Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces

• Two USB 2.0 ports

• One DB-9 serial port (RS-232)

• One VGA port

• Optional ConnectX DDR InfiniBand CX4 connector

• Electrical Requirements (per module)

• High-efficiency power supply (greater than 80%)

• Output Power: 400W

• Universal input voltage 100V to 240V

• Frequency: 50Hz to 60Hz, single phase
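As a companion to the storage bullets above, here is a minimal sketch of building the two-drive RAID-1 mirror with Linux software RAID (mdadm). The device names are hypothetical; verify them with lsblk first, since creating the array destroys existing data on both drives.

```python
import subprocess

# Hypothetical device names for the two 2.5" SATA bays; confirm with
# `lsblk` before running - creating the array erases both drives.
DRIVES = ["/dev/sda", "/dev/sdb"]

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a two-disk RAID-1 mirror, format it, and verify its health.
run("mdadm", "--create", "/dev/md0", "--level=1",
    "--raid-devices=2", *DRIVES)
run("mkfs.ext4", "/dev/md0")
run("mdadm", "--detail", "/dev/md0")  # members should show "active sync"
```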


Page 14: Compute Blades

compute blade - 1BX5501


Page 15: Compute Blades

compute blade - 1BA2301

• Processor (per node)

• Two AMD Opteron 2300 or 2400 Series processors (4 core or 6 core processors)

• Next generation "Istanbul" or "Shanghai" microarchitectures

• Integrated memory controller per processor

• 45nm process technology

• Chipset (per node)

• NVIDIA MCP55 Pro

• Memory (per node)

• 667MHz or 800MHz DDR2 memory

• Eight DIMM sockets supporting up to 64GB of memory

• Storage (per node)

• One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays

• Supports RAID levels 0 and 1 with Linux software RAID (with 2.5" drives)

• Drives shock-mounted into the enclosure to prevent vibration-related failures

• Support for high-performance solid state drives

• Management (per node)

• Integrated IPMI 2.0 module

• Dedicated RJ45 LAN for management network

• I/O connections (per node)

• One open PCI-Express expansion slot running at x16

• Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces

• Two USB 2.0 ports

• One DB-9 serial port (RS-232)

• One VGA port

• Optional ConnectX DDR InfiniBand CX4 connector

• Electrical Requirements (per node)

• High-efficiency power supply (greater than 80%)

• Output Power: 400W

• Universal input voltage 100V to 240V

• Frequency: 50Hz to 60Hz, single phase
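A back-of-the-envelope bound on wall draw, assuming the 400W output rating and the 80% efficiency floor quoted above (actual efficiency, and therefore draw, varies with load):

```python
# Worst-case input power per blade at full 400W output and exactly
# 80% supply efficiency; real supplies rated "80%+" should draw less.
OUTPUT_W = 400
EFFICIENCY = 0.80

input_w = OUTPUT_W / EFFICIENCY          # 500 W per blade at the wall
rack_kw = 84 * input_w / 1000            # 84 blades in a full 42U rack

print(f"per-blade input: {input_w:.0f} W")   # 500 W
print(f"full-rack bound: {rack_kw:.1f} kW")  # 42.0 kW
```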


Page 16: Compute Blades

compute blade - 1BA2301


Page 17: Compute Blades

availability and pricing

• Both the 1BX5501 and 1BA2301 are available and shipping now

• Systems available online for remote testing

• For pricing and custom configurations, contact your Account Representative

• (866) 802-8222

[email protected]

• http://www.advancedclustering.com/go/blade
