System Configuration Guide January 2012 - NEC

SYSBLADE-11-001-03

Page 1

System Configuration Guide

SYSBLADE-11-001-03

January 2012

Page 2

SIGMABLADE Configuration Guide

Chapter 1 Blade Enclosure

- SIGMABLADE-H v2
- SIGMABLADE-M

Chapter 2 Blade
- Express5800/B120b
- Express5800/B120a
- Express5800/B120b-h
- Express5800/B120b-d
- Express5800/B120a-d
- Express5800/B140a-T
- Storage and I/O Blade AD106a/b
- Tape Blade AT101a

Chapter 3 Management S/W
- SigmaSystemCenter


Page 3

1. Blade Enclosure

Page 4

Blade Enclosure


SIGMABLADE-H v2
Packs up to 16 CPU Blades in a 10U-height chassis

Features
- Height: 10U
- Supports up to 16 CPU Blades and eight switch modules
- SIGMABLADE monitor provided as standard
- Redundant EM Cards, power units, and cooling fans
- Each Blade, Switch Module, Power Unit, Enclosure Manager Card (EM Card), and fan is hot-swappable
- DVD-ROM and local KVM switch provided as standard
- Power redundancy supported
- AC 100V supported
- 80 PLUS Silver PSU supported

SIGMABLADE-M
Packs up to 8 CPU Blades in a 6U-height chassis

Features
- Height: 6U
- Supports up to 8 CPU Blades and 6 switch modules
- SIGMABLADE monitor provided as standard
- Redundant EM Cards, power units, and cooling fans
- Each Blade, Switch Module, Power Unit, Enclosure Manager Card (EM Card), and fan is hot-swappable
- DVD-ROM and local KVM switch provided as standard
- Power redundancy supported
- 80 PLUS Gold PSU supported

Page 5

SIGMABLADE-H v2

Blade Enclosure (SIGMABLADE-H v2)

Contents:
- SIGMABLADE-H v2 specifications
- SIGMABLADE-H v2 Quick Sheet
- SIGMABLADE-H v2 basic configuration
- Enclosure Manager Card (EM Card)
- Switch Module / Pass-Through Card (LAN)
- Switch Module / Pass-Through Card (FC)
- Blade Enclosure Configuration
- Switch Module connection
- Switch Module Configuration
- Power Unit & Cooling Fan Configuration
- Configuration of the UPS

Page 6

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2
Packs up to 16 CPU Blades in a 10U-height chassis

Features
- Height: 10U
- Supports up to 16 CPU Blades and eight switch modules
- SIGMABLADE monitor provided as standard
- Redundant EM Cards, power units, and cooling fans
- Each Blade, Switch Module, Power Unit, EM Card, and cooling fan is hot-swappable
- DVD-ROM and local KVM switch provided as standard
- Power redundancy supported
- AC 100V supported
- 80 PLUS Silver PSU supported

Model Name: SIGMABLADE-H v2
N-Code: N8405-040AF

Maximum number of modules installed:
- CPU Blade: 16
- Switch Module: 8
- EM Card: 2
- Power Unit: 6
- Fan Unit: 10

I/O Interface:
- USB: 104-pin connector x1
- Serial (COM): D-Sub 9-pin connector (unused) x1
- PS/2: PS/2 connector x2
- Display: D-Sub 15-pin connector x1

Auxiliary storage: DVD-ROM x1 (reading speed: DVD x3 to x8, CD x10 to x24)

Power Supply Unit: not provided as standard (up to 6)
Voltage:
- AC 200V input: AC 200V to 240V ±10%
- AC 100V input: AC 100V to 120V ±10%
Frequency: 50/60 Hz ±1 Hz
Number of Plugs: up to 6
EM Card: not provided as standard (up to 2)
Fan Unit: not provided as standard (up to 10)
Size (W x D x H mm): 483 x 823 x 442 (10U) (protruding objects included)
Maximum Power Consumption: 10,231 W (AC) / 10,440 VA (*1)
Weight (maximum configuration): 209 kg (*2)
Temperature / Humidity:
- During operation: 10 to 35°C / 20 to 80% (non-condensing)
- Stored: -10 to 55°C / 20 to 80% (non-condensing)

* A set of console devices (display, keyboard, and mouse) is required for each Blade Enclosure.
* Recording to the optical disk drive is not supported.
* For system configuration details, refer to the system guide of each model.
* Power Unit, EM Card, and Fan Unit are mandatory options. To install additional power units and cooling fans, see 'Power Unit & Cooling Fan Configuration'.
* If necessary, prepare a cross cable [K410-84(05)] for each Blade Enclosure.
* For standard LAN connection, select from 1Gb Intelligent Switch, 1Gb Pass-Through Card, or 10Gb Pass-Through Card.
* Front bezel [N8405-051] is a recommended option to protect CPU Blades.
* For efficient cooling, any vacant slot must be covered with a Slot Cover. Slot Covers are not installed as standard; purchase them separately to cover all vacant slots.
* Update the EM firmware to use the latest options.

Note:
*1 Represents the maximum wattage consumed in the maximum configuration. For more information, refer to the 'Power Unit & Cooling Fan Configuration' section.
*2 Weight when the maximum number of CPU Blades (including NEC Express5800/B140a-T), switch modules, EM Cards, power units, and cooling fans is installed (not including the front bezel).

Page 7

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2 Quick Sheet

[Quick Sheet diagram: Blade Enclosure SIGMABLADE-H v2 [N8405-040AF] layout, showing CPU Blade slots 1 to 16, Switch Module slots 1 to 8, Power Units 1 to 6, Fan Units 1 to 10, two EM slots (COM/LAN), DVD-ROM, USB port, and display/keyboard/mouse connectors]

* Cross cable [K410-84(05)] (5m)

* Slot Cover (CPU Blade) [N8405-046]: mandatory for vacant slots.


* Enclosure Manager Card (EM Card) [N8405-043]: mandatory option (at least one card required).

Network module

* Fan Unit [N8405-045] * Mandatory option (at least 4 units required)

* Display / Keyboard / Mouse interfaces: console devices are not attached to the Blade Enclosure; purchase them separately if necessary.


* Power Unit [N8405-044F]: mandatory option.

* 1Gb Intelligent L3 Switch [N8406-023A]
* 1000BASE-SX SFP module [N8406-024]
* 10Gb Intelligent L3 Switch [N8406-026]
* 1:10Gb Intelligent L3 Switch [N8406-044]
* 10GBASE-SR XFP module [N8406-027]
* 1Gb Pass-Through Card [N8406-029]
* 10Gb Pass-Through Card [N8406-036]
* 10Gb Intelligent L3 Switch [N8406-051F]
* 10GBASE-SR SFP+ module [N8406-037]
* 1000BASE-T SFP module [N8406-039]
* FC SFP Module [NF9330-SF02]
* 2/4G FC Pass-Through Card [N8406-030]
* FC SFP Module [N8406-015]
* 8G FC switch (12 ports) [N8406-040]
* 8G FC switch (24 ports) [N8406-042]
* 4/8G FC SFP+ Module [N8406-041]

* To use the on-board LAN, purchase a 1Gb Intelligent Switch, 1:10Gb Intelligent L3 Switch, 10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, or 10Gb Pass-Through Card.
* The 1Gb Intelligent Switch, 1:10Gb Intelligent L3 Switch, and 1Gb Pass-Through Card can be installed in any switch module slot. The 10Gb Intelligent L3 Switch can be installed in slots 1 to 6 only. The FC Switch and FC Pass-Through Card can be installed in slots 3 to 8 only.
* Connecting an FC Pass-Through Card to an external 8G FC switch is not supported.
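The slot placement rules above are mechanical enough to check automatically. The following is a minimal illustrative sketch (not an NEC tool): the module names and the rule table are transcribed from the bullets above, while the function itself is hypothetical.

```python
# Hypothetical validator for SIGMABLADE-H v2 switch module placement.
# The rule table below is transcribed from this guide; everything else
# is an assumption made for illustration.

# Allowed switch module slots (1-8) per module family.
ALLOWED_SLOTS = {
    "1Gb Intelligent Switch":       range(1, 9),  # any slot
    "1:10Gb Intelligent L3 Switch": range(1, 9),  # any slot
    "1Gb Pass-Through Card":        range(1, 9),  # any slot
    "10Gb Intelligent L3 Switch":   range(1, 7),  # slots 1 to 6 only
    "FC Switch":                    range(3, 9),  # slots 3 to 8 only
    "FC Pass-Through Card":         range(3, 9),  # slots 3 to 8 only
}

def placement_errors(config):
    """config maps slot number (1-8) to a module family name.
    Returns a list of human-readable rule violations."""
    errors = []
    for slot, module in config.items():
        allowed = ALLOWED_SLOTS.get(module)
        if allowed is None:
            errors.append(f"unknown module: {module}")
        elif slot not in allowed:
            errors.append(f"{module} not allowed in slot {slot}")
    return errors

# Example: a 10Gb L3 switch in slot 7 violates the 'slots 1 to 6' rule.
config = {1: "1Gb Intelligent Switch", 7: "10Gb Intelligent L3 Switch", 3: "FC Switch"}
print(placement_errors(config))  # -> ['10Gb Intelligent L3 Switch not allowed in slot 7']
```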


* Front bezel [N8405-051]

Page 8

* N8405-044F Power Unit: mandatory option. Refer to 'Power Unit & Cooling Fan Configuration' for more detail.

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2 basic configuration

* N8405-043 Enclosure Manager Card (EM Card) * Mandatory option

Enclosure Manager Card (EM Card)

Power Unit

* N8405-040AF Blade Enclosure (SIGMABLADE-H v2)

* Power Unit, Fan Unit, and Enclosure Manager Card (EM Card) are mandatory. For LAN connection, select either 1Gb Intelligent Switch or 1Gb Pass-Through Card.
* For efficient cooling, any vacant slot must be covered with a Slot Cover.

Local KVM switch function provided as standard.
SIGMABLADE monitor provided as standard.

* N8405-045 Fan Unit: mandatory option. Refer to 'Power Unit & Cooling Fan Configuration' for more detail.

* N8405-046 Slot Cover (CPU Blade): for vacant CPU Blade slots of Blade Enclosure SIGMABLADE-H v2 (N8405-040AF).

* For efficient cooling, any vacant slot must be covered with a Slot Cover.

* N8405-051 Front Bezel

Fan Unit

Slot Cover, Bezel

Page 9

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2 basic configuration

* N8406-030 2/4G FC Pass-Through Card: SFP port x16. Up to 6 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional FC SFP Module (N8406-015) is necessary for connection with the FC cable.

* Connection with an 8G FC switch is not supported.

* N8406-029 1Gb Pass-Through Card: LAN port x16. Up to 8 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF).

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

* N8406-023A 1Gb Intelligent L3 Switch: LAN port 10BASE-T/100BASE-TX/1000BASE-T x5. Up to 8 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional 1000BASE-SX SFP module (N8406-024) is necessary for connection with a 1000BASE-SX cable assembly.

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

Switch Module


* N8406-026 10Gb Intelligent L3 Switch: up to 2 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional 10GBASE-SR XFP module (N8406-027) is necessary for connection with an external network.

* N8406-040 8G FC switch (12 ports): outside port x4. Up to 4 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). Two FC SFP+ modules are attached as standard. To add SFP modules, purchase the 4/8G FC SFP+ module (N8406-041).

* N8406-042 8G FC switch (24 ports): outside port x8. Up to 6 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). Four FC SFP+ modules are attached as standard. To add SFP modules, purchase the 4/8G FC SFP+ module (N8406-041).

* N8406-044 1:10Gb Intelligent L3 Switch: external user LAN 10BASE-T/100BASE-TX/1000BASE-T port x4, XFP port x2. Up to 6 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional 10GBASE-SR XFP module (N8406-027) is necessary for connection with an external 10Gb network.

* N8406-036 10Gb Pass-Through Card: external port x16. Up to 8 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is necessary for connection with an external network. Cannot be connected with the 10GbE (2ch) adapter (N8403-024).

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

* N8406-051F 10Gb Intelligent L3 Switch: external port x8. Up to 6 modules can be installed into Blade Enclosure SIGMABLADE-H v2 (N8405-040AF). An optional 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is necessary for connection with an external network. For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (adjacent slots only; exclusive connection). Do not connect with the 10GbE Adapter (N8403-024).

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

Page 10

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2

CPU Blade Slot 1 to 8 (upper row, from left to right) and Slot 9 to 16 (lower row, from left to right)

Front View

SIGMABLADE Monitor

DVD-ROM drive

USB port

Page 11

Fan Slot 1 to 5 (upper row, from left to right) and 6 to 10 (lower row, from left to right)
Enclosure Manager Card (EM Card) 1
Enclosure Manager Card (EM Card) 2

Fan LED

Rear View

Blade Enclosure interconnect port

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2

AC Inlet

Switch Module Slot 1, 3, 5, and 7 (from top to bottom)

Switch Module Slot 2, 4, 6, and 8 (from top to bottom)

Power Unit 1 to 6 (from left to right)

Power LED
Keyboard connector
Mouse connector
Display connector

Page 12

Blade Enclosure (SIGMABLADE-H v2)

SIGMABLADE-H v2

Side Views

[Side-view dimension drawings, in mm: width 483, depth 823, height 442 (10U); 445 also appears as a drawn dimension]

Page 13

Enclosure Management Card (EM Card)

Features
- Enclosure Management (EM) Card for SIGMABLADE-H v2
- Monitors the status of SIGMABLADE-H v2
- Redundancy enabled by installing two EM Cards

Blade Enclosure (SIGMABLADE-H v2)

Model Name: Enclosure Management Card (EM Card)
N-Code: N8405-043

Features:
* Power management
* Cooling management
* Enclosure management
* System status monitoring
* Redundancy
* Switch Module set-up function
* Power consumption management
* Enclosure web console management

Interface:
- Serial Port (COM): D-Sub 9-pin connector x1
- LAN: RJ-45 connector x1, 10BASE-T/100BASE-TX (automatic negotiation only)

Power Consumption: 5.5 W (DC)
Weight: 0.3 kg

Front View

Serial (COM) port
Management LAN Connector

STATUS LED

ID LED

ACTIVE LED

RESET Switch

SPEED LED
LINK/ACT LED

► At least one EM Card is required for each Blade Enclosure (SIGMABLADE-H v2).
► If there is no EM Card in a Blade Enclosure, you cannot power on the Blade Enclosure.
► The Management LAN Connector supports auto-negotiation only. Pay attention to the settings of the external switch connected to this connector.

Important

ID Switch

Page 14

Enclosure Management Card (EM Card)

■ Enclosure web console management
* The EM Card web console displays a list of the SIGMABLADE-H v2/-M enclosures installed in the same rack cabinet. Each enclosure has its own web console available to view from the list.

■ Power consumption management
* You can set the maximum power consumption for each enclosure by using the EM Card, and monitor to keep power consumption below the maximum allowed wattage. In addition, you can set a maximum power consumption value for a group of blade enclosures mounted in the same rack, and manage their combined power consumption.
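The two-level capping described above (a per-enclosure maximum plus a rack-group maximum over the sum) can be sketched as a simple check. This is an illustrative sketch only; the function name and wattage values are hypothetical and do not reflect the EM Card's actual interface.

```python
# Illustrative sketch of two-level power capping: each enclosure has its
# own maximum, and a rack-wide group maximum covers the combined total.
# Names and numbers are hypothetical, not EM Card APIs.

def check_power(readings_w, enclosure_cap_w, group_cap_w):
    """readings_w: dict of enclosure name -> measured consumption in watts.
    Returns a list of cap violations (empty if everything is within caps)."""
    violations = []
    # Level 1: each enclosure against its own cap.
    for name, watts in readings_w.items():
        if watts > enclosure_cap_w:
            violations.append(f"{name}: {watts} W exceeds enclosure cap {enclosure_cap_w} W")
    # Level 2: the rack group total against the group cap.
    total = sum(readings_w.values())
    if total > group_cap_w:
        violations.append(f"rack group: {total} W exceeds group cap {group_cap_w} W")
    return violations

# Two enclosures sharing one rack-level cap (example values only).
readings = {"enclosure-1": 9800, "enclosure-2": 10500}
print(check_power(readings, enclosure_cap_w=10231, group_cap_w=21000))
# -> ['enclosure-2: 10500 W exceeds enclosure cap 10231 W']
```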

Blade Enclosure (SIGMABLADE-H v2)

■ Virtual IO control
* Virtualizes the MAC address / WWN / UUID / serial number of CPU Blades.

Note:
* The vIO control function is supported by B120a, B120a-d, B120b, B120b-d, B120b-h*1, AD106a, AD106b, and AT101a only.
* The vIO control function is supported by the following cards only:
- N8403-018 Fibre Channel Controller (2ch/4Gbps)
- N8403-034 Fibre Channel Controller*2 (2ch/8Gbps)
- N8403-021 1000BASE-T (2ch) (for iSCSI)
- N8403-022 1000BASE-T (4ch) (for iSCSI)

*1: Not supported in PXE Boot configurations
*2: Latest BIOS/EM firmware required

■ EM card function list

Function                        EMFW 04.30 or older   EMFW 05.00 or later
Power management                OK                    OK
Cooling management              OK                    OK
Enclosure management            OK                    OK
System information monitoring   OK                    OK
Redundant configuration         OK                    OK
Switch module configuration     OK                    OK
Blade enclosure linkage         OK                    OK
Power capping                   OK                    OK
vIO control                     NG                    OK

Page 15

1Gb Intelligent L3 Switch

Features
- Layer-3 switch module with RJ-45/SFP module ports
- Easy-to-use network configuration mode (SmartPanel)

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

Model Name: 1Gb Intelligent L3 Switch
N-Code: N8406-023A

Performance:
- Switching capacity: 48 Gbps
- Forwarding method: store and forward
- MAC address: up to 8K addresses (per module)
- Forwarding rate: 1,488,095 pps per port; external ports max. 7.4 Mpps (5 x 1,488,095)

Interface:
- User port: RJ-45 (10BASE-T/100BASE-TX/1000BASE-T) 5 ports (four ports are mutually exclusive with SFP modules); SFP module (1000BASE-SX) 4 ports (mutually exclusive with RJ-45)
- Serial Port: D-Sub 9-pin x1
- Interconnect (modules): 1000BASE-X 2 ports
- Interconnect (CPU Blades): 1000BASE-X 16 ports

Features:
* Layer 2 Switch function, Layer 3 Switch function
* VLAN (Port/Tag/Private)
* Spanning tree (STP, RSTP, MSTP, PVRST)
* Link aggregation (static, LACP)
* Trunk failover (original)
* QoS, ACL (Access Control List)
* Jumbo frame (up to 9K), port mirroring, AutoMDI/MDI-X
* IGMP snooping (v1, v2), NTP client, DNS client, syslog
* 802.1x, RADIUS, TACACS+
* SNMP v1, v2c, v3; RMON (Groups 1, 2, 3, 9)
* Routing protocols (static, RIPv1, RIPv2, OSPF), VRRP

Management interface:
- GUI: Web Console
- CLI: telnet/ssh, console (serial, D-Sub 9-pin connector)

Supported MIB: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1, 2, 3, 9

Specifications: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, IEEE802.1w, IEEE802.1s, IEEE802.1Q

Size: 1 slot width
Power Consumption: up to 30 W (DC)
Weight: 2 kg

Page 16


1Gb Intelligent L3 Switch

Front View

ID LED
Serial Port

user port (RJ-45, SFP module)

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

RESET Switch
STATUS LED
Serial Port
LINK/ACT LED, SPEED LED

► If you need more LAN ports than provided as standard to expand your network connection, an optional 1000BASE-T Adapter is required.
► An optional cross cable (K410-84(05)) is recommended for configuring the 1Gb Intelligent L3 Switch.
► If you need 1000BASE-SX, an optional 1000BASE-SX SFP module [N8406-024] is required.
► The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Important

Page 17

1:10Gb Intelligent L3 Switch

Features
- Layer-3 switch module with 1Gbps RJ-45 x4 and 10Gbps XFP slot x2

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

Model Name: 1:10Gb Intelligent L3 Switch
N-Code: N8406-044

Performance:
- Switching capacity: 122 Gbps
- Forwarding method: store and forward
- MAC address: up to 8K addresses (per module)
- Forwarding rate: 1Gbps ports 1,488,095 pps per port; 10Gbps ports 14,880,950 pps per port; external ports max. 48 Mpps

Interface:
- User port: RJ-45 (10BASE-T/100BASE-TX/1000BASE-T) 4 ports; XFP module (10GBASE-SR) 2 ports
- Serial Port: D-Sub 9-pin x1
- Interconnect (modules): 10GBASE-X 1 port
- Interconnect (CPU Blades): 1000BASE-X 16 ports

Features:
* Layer 2 Switch function, Layer 3 Switch function
* VLAN (Port/Tag/Protocol/Private)
* Spanning tree (STP, RSTP, MSTP, PVRST)
* Link aggregation (static, LACP)
* Trunk failover (original)
* QoS, ACL (Access Control List)
* Jumbo frame (up to 9K), port mirroring, AutoMDI/MDI-X
* IGMP snooping (v1, v2, v3), NTP client, DNS client, syslog
* 802.1x, RADIUS, TACACS+
* SNMP v1, v2c, v3; RMON (Groups 1, 2, 3, 9)
* Routing protocols (static, RIPv1, RIPv2, OSPF), VRRP

Management interface:
- GUI: Web Console
- CLI: telnet/ssh, console (serial, D-Sub 9-pin connector)

Supported MIB: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1, 2, 3, 9

Specifications: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, IEEE802.1w, IEEE802.1s, IEEE802.1Q

Size: 1 slot width
Power Consumption: up to 50 W (DC)
Weight: 2 kg

Page 18


N8406-044

1:10Gb Intelligent L3 Switch

Front View

ID LED
Serial Port

user port (RJ-45, XFP module)

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

RESET Switch
STATUS LED

Serial Port

LINK/ACT LED, SPEED LED

► If you need more LAN ports than provided as standard to expand your network connection, an optional 1000BASE-T Adapter is required.
► An optional cross cable (K410-84(05)) is recommended for configuring the 1:10Gb Intelligent L3 Switch.
► If you need 10GBASE-SR, an optional XFP module [N8406-027] is required.
► The connection speed to CPU Blades is 1Gbps (not 10Gbps).

Important

Page 19

Switch Module / Pass-Through Card (LAN)

10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-H v2)

Features
- Layer-3 switch module with eight 10Gb SFP+ module slots

Model Name: 10Gb Intelligent L3 Switch
N-Code: N8406-051F

Performance:
- Switching capacity: 480 Gbps
- MAC address: up to 128K addresses (per module)
- Forwarding rate: 14.8 Mpps per port; external ports max. 118 Mpps (8 x 14.8)

Interface:
- User port: SFP+ port x8 (4 ports are mutually exclusive with interconnect (modules) ports)
- Interconnect (modules): internal port x4 (mutually exclusive with external ports)
- Interconnect (CPU Blades): internal port x16

Features:
* Layer 2 Switch function, Layer 3 Switch function
* VLAN (Port/Tag/Protocol/Private)
* Spanning tree (STP, RSTP, MSTP, PVRST)
* Link aggregation (static, LACP)
* Trunk failover (original)
* QoS, ACL (Access Control List)
* Jumbo frame (up to 9K), port mirroring, AutoMDI/MDI-X
* IGMP snooping (v1, v2, v3), NTP client, DNS client, syslog
* 802.1x, RADIUS, TACACS+
* SNMP v1, v2c, v3; RMON (Groups 1, 2, 3, 9)
* Routing protocols (static, RIPv1, RIPv2, OSPF), VRRP

Management interface:
- GUI: Web console
- CLI: telnet/ssh, serial console (connect via EM Card)

Supported MIB: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1, 2, 3, 9

Size: 1 slot width
Power Consumption: up to 100 W (DC)
Weight: 2 kg

Page 20

10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)


N8406-051F

RESET Switch

ID LED, LINK LED

user port (SFP+port)

ACT LED

Front View

► For external network equipment connection, a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is required.
► Note that SFP modules differ between 1Gbps and 10Gbps.
► For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (adjacent slots only).
► Connection with the 10GbE Adapter (N8403-024) is not supported.
► 10BASE-T/100BASE-TX is not supported.

STATUS LED

Important

Page 21

10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-H v2)

Features
- Layer-3 switch module with four 10GBASE-SR external ports. Both internal and external ports support 10Gbps speed.

Switch Module / Pass-Through Card (LAN)

Model Name: 10Gb Intelligent L3 Switch
N-Code: N8406-026

Performance:
- Switching capacity: 10Gbps per port (20Gbps with full-duplex connection); non-blocking (full wire speed on all connections)
- Forwarding method: store and forward
- MAC address: up to 8K addresses (per module)

Interface:
- User port: 10Gb XFP module slot x4
- Serial Port: D-Sub 9-pin x1
- Interconnect (CPU Blades): 10Gbps 16 ports

Features:
* Layer 3 Switch function
* VLAN (Port/Tag)
* Spanning tree (STP, RSTP, MSTP)
* Link aggregation (static), link aggregation (LACP)
* Trunk failover (original)
* QoS
* Jumbo frame (up to 9K), port mirroring
* IGMP snooping (v1, v2, v3)
* NTP client, DNS client, syslog
* RADIUS (client), TACACS+
* SNMP v1, v2c, v3; RMON (Groups 1, 2, 3, 9)

Management interface:
- GUI: Web console
- CLI: telnet/ssh, console (serial, D-Sub 9-pin connector)

Supported MIB: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), 802.1Q extension bridge MIB (RFC 2674), Entity MIB (RFC 2037), RMON v1 (RFC 1757) Groups 1, 2, 3, 9

Specifications: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, 802.1p, IEEE802.1w, IEEE802.1s, IEEE802.1Q, 802.3x, 802.1x

Size: 2 slot width
Power Consumption: up to 70 W (DC)
Weight: 3 kg

Page 22

10Gb Intelligent L3 Switch

Front View

Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

STATUS LED

ID LED
Serial Port

LINK LED

user port

ACT LED

Uplink port (XFP module slot)

► A 10GbE Adapter (N8403-024) is required for the CPU Blade.
► This switch is two slots wide and can be installed into switch module slots 5/6 or 7/8 only.
► An optional cross cable (K410-84(05)) is recommended for configuring the 10Gb Intelligent L3 Switch.
► A 10GBASE-SR XFP module (N8406-027) is necessary to connect to an external network via the uplink ports.
► To use both ports of the 10GbE Adapter (N8403-024), you must install two 10Gb Intelligent L3 Switches.
► This switch supports 10Gbps only. It does not support 10/100/1000BASE-T.

Important

RESET Switch

Page 23

1Gb Pass-Through Card

Features
- Enables LAN ports provided in the CPU Blades to be used for network connections


Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

Model Name: 1Gb Pass-Through Card
N-Code: N8406-029

Interface: user port 1000BASE-T, up to 16 ports (RJ-45)
Safety Standard: UL/CSA, CB, FCC, C-Tick, CE, CSA (EMI)
Size: 1 slot width
Power Consumption: up to 14 W (DC)
Weight: 2 kg

Front View

LAN ports

STATUS LED

ID LED
SPEED LED
LINK/ACT LED

► If you need more LAN ports than provided as standard to expand your network connection, a 1000BASE-T Adapter is required.
► To use both ports of the on-board LAN or 1000BASE-T Adapter, you need two modules installed in adjacent switch module slots.
► Only 1000BASE-T is supported for connection to an external network (10BASE-T/100BASE-TX are not supported).

Important

RESET Switch

Page 24

10Gb Pass-Through Card

Features
- Pass-Through card with internal/external 1Gb/10Gb support


Blade Enclosure (SIGMABLADE-H v2)

Switch Module / Pass-Through Card (LAN)

Model Name: 10Gb Pass-Through Card
N-Code: N8406-036

Interface: user port, up to 16 ports
Size: 1 slot width
Power Consumption: up to 76 W (DC)
Weight: 2 kg

Front View

► To connect to an external network, choose a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) to match the LAN port type of the blade.
► Cannot be connected to the 10GbE adapter (N8403-024) on the blade server.
► To connect external devices, route through an external switch.
► Does not support 10BASE-T/100BASE-TX.

Important

ID LED
STATUS LED
Port STATUS LED
RESET Switch
LAN port

Page 25

Switch Module / Pass-Through Card (FC)

8G FC switch (12 ports)

Features
- 8G FC Switch Module

Blade Enclosure (SIGMABLADE-H v2)

Model Name: 8G FC switch (12 ports)
N-Code: N8406-040

Port (max): 12 (12 non-blocking)
Interlink Ports / External Ports: 8 / 4
Port maximum transfer rate (Gb/s): 8
Internal Architecture: shared memory; F / FL / E port support

Features:
* High-speed switching function
* Auto routing function
* Cascade connection of FC Switches
* Zoning function

Front View
ID LED
STATUS LED
RESET Switch
Active LED
PORT STATUS LED
External Device Connection Ports (open ports: port 17 to 20)

► This 8G FC switch (12 ports) can support up to 8 CPU Blades. See 'Power Unit & Cooling Fan Configuration'.
► When you use both ports of a Fibre Channel controller (2ch) in the CPU Blade, two FC Switch Modules must be installed in the Blade Enclosure.
► Two FC SFP+ modules are attached as standard to connect external FC cables. To add SFP modules, purchase the 4/8G FC SFP+ module (N8406-041).
► This product supports 4Gbps/8Gbps.

Important

Size: 1 slot width
Power Consumption: up to 38 W (DC)
Weight: 2 kg

Page 26

Switch Module / Pass-Through Card (FC)

8G FC Switch (24 ports)

Features
- 8G FC Switch Module
- High-speed switching function
- Auto-routing function
- Cascade connection of FC Switches
- Zoning function

Model Name: 8G FC Switch (24 ports)
N-Code: N8406-042
Ports (max): 24 (24 non-blocking)
Interlink Ports / External Ports: 16 / 8
Port maximum transfer rate: 8 Gb/s
Internal Architecture: shared memory; F / FL / E port support
Size: 1 slot width
Power Consumption: up to 38W (DC)
Weight: 2kg

Front View: ID LED, STATUS LED, Active LED, PORT STATUS LED, RESET Switch, external device connection ports

Important
►When you use both ports of a Fibre Channel controller (2ch) in the CPU Blade, two FC Switch Modules must be installed in the Blade Enclosure.
►Four FC SFP+ modules are attached as standard for connecting external FC cables. To add SFP+ modules, purchase the 4/8G FC SFP+ module (N8406-041).
►This product supports 4Gbps/8Gbps.


Switch Module / Pass-Through Card (FC)

2/4G FC Pass-Through Card

Features
- Connects a port of a Fibre Channel controller (2ch) in the CPU Blade to an external network.

Model Name: 2/4G FC Pass-Through Card
N-Code: N8406-030
Interface: user-port SFP, up to 16 ports
Safety Standards: UL/CSA, CB, FCC, C-Tick, CE, CSA (EMI)
Size: 1 slot width
Power Consumption: up to 39W (DC)
Weight: 2kg

Front View: communication ports, ID LED, STATUS LED, LINK LED, RESET Switch

Important
►When you use both ports of a Fibre Channel controller (2ch) in the CPU Blade, two FC Pass-Through Cards must be installed in the Blade Enclosure.
►To connect an FC cable, you need an optional FC SFP module [N8406-015] for each port.
►This product supports 2Gbps/4Gbps. It does not support 1Gbps.
►Connection with the 8G FC Switch is not supported.


Blade Enclosure Configuration

SIGMABLADE-H v2

Enclosure Management Card (EM Card): EM Slot x2
* Enclosure Management Card (EM Card) [N8405-043]
* Not installed as standard; please be careful. At least one card is required. Installing two cards enables redundancy.

Power Unit: Power Slot x6
* Power Unit [N8405-044F]
* Not provided as standard; please be careful. Up to six units are allowed in the Blade Enclosure. See "Power Unit & Cooling Fan Configuration".

CPU Blade: Blade Slot x16
* See "CPU Blade Configuration".
* Slot Cover (CPU Blade) [N8405-046]: for efficient cooling, every vacant slot must be covered with a slot cover.
* Necessary number: 16 - (blade slots occupied by CPU Blades)
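The slot-cover count above is a simple subtraction; a minimal sketch, assuming a helper name of our own choosing (not an NEC tool), makes the rule explicit:

```python
# Hedged sketch: how many CPU Blade slot covers (N8405-046) a
# SIGMABLADE-H v2 needs. The function name is illustrative, not NEC's.

TOTAL_BLADE_SLOTS = 16  # SIGMABLADE-H v2 front blade slots

def slot_covers_needed(occupied_slots: int) -> int:
    """Every vacant blade slot must carry a slot cover for proper cooling."""
    if not 0 <= occupied_slots <= TOTAL_BLADE_SLOTS:
        raise ValueError("occupied_slots must be between 0 and 16")
    return TOTAL_BLADE_SLOTS - occupied_slots

print(slot_covers_needed(10))  # 6 covers for a ten-blade configuration
```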


Blade Enclosure Configuration

SIGMABLADE-H v2

Switch Module: Switch Slot x8
* See "Switch Module Configuration".

Fan Unit: Fan Slot x10
* Fan Unit [N8405-045]
* Not provided as standard; please be careful. Up to 10 units are allowed in the Blade Enclosure. See "Power Unit & Cooling Fan Configuration".

Front Bezel
* Front Bezel [N8405-051]
* Not provided as standard; please be careful.


Blade Enclosure Configuration

[Connecting console devices to the enclosure directly] SIGMABLADE-H v2

* Display: display interface x1
* Keyboard: keyboard interface x1
* Mouse: mouse interface x1
* FDD/DVD-ROM: USB interface x1. Not provided as standard; purchase it if necessary.
* Flash FDD (USB) [N8160-86], capacity 1.44MB

Remarks: The Flash FDD is not provided as standard. For OS re-installation, purchase at least one Flash FDD per system. For more information, read the User's Guide of the Flash FDD. The Flash FDD does not support the disaster-recovery functions of the OS or backup software.


Blade Enclosure Configuration
SIGMABLADE-H v2

[Connecting console devices to the blades directly] (for maintenance)

CPU Blade: Blade Slot x16, SUV interface x1 per blade

SUV cable [K410-150(00)] (60cm)
* One cable is attached to the SIGMABLADE-H v2 (N8405-040) as standard.
* Connectors: 1: Display I/F x1, 2: USB I/F x2, 3: Serial I/F x1
* Connect the display to connector 1 and the FDD/DVD-ROM to connector 2.
* The following configuration is recommended when you install the OS.

FDD/DVD-ROM: not provided as standard; purchase it if necessary.
* Flash FDD (USB) [N8160-86], capacity 1.44MB

Remarks: The Flash FDD is not provided as standard. For OS re-installation, purchase at least one Flash FDD per system. For more information, read the User's Guide of the Flash FDD. The Flash FDD does not support the disaster-recovery functions of the OS or backup software.


Blade Enclosure Configuration

[SSU console connection]

Server Switch Unit (SSU) [N8191-10F]
* Up to eight servers can be connected per SSU. By configuring cascade connections of 8 SSUs, up to 64 servers can be connected.
* Cascade connection with the N8191-09/09A server switch unit is possible; sub-cascade connection with the N8191-09/09A is not supported.
* Only single-stage cascade is supported.

Cables
* Switch Unit Connection Cable & PS/2 cable set: [K410-119(1A)] (1.8m), [K410-119(03)] (3m), [K410-119(05)] (5m). The 1.8m set [K410-119(1A)] is required for SSU cascade connection.
* Display / keyboard cable extension: [K410-104A(02)] (2m), [K410-104A(03)] (3m)

The display, keyboard, and mouse connect to the SSU through the corresponding interfaces; the SSU then connects to the SIGMABLADE-H v2 enclosures.


Blade Enclosure Configuration

[Connecting a Remote Console]

Enclosure Management Card (EM Card)

[EM Card major functions]
* Remote KVM function
* Virtual media function
* Collection of Blade Enclosure status information (power units, cooling fans, etc.)
* CPU Blade status information is collected via NEC ESMPRO Agent.

[Recommended specification for the management server]
* CPU: Intel Xeon/Pentium/Celeron family processor, 1GHz or above
* Memory: 512MB or more
* HDD capacity: 350MB or more (separate disk space is required for data backup)
* LAN: 1 port
* Serial: 1 port
* Recommended OS: Windows Server 2008 R2 or later
* Browser: Internet Explorer 8.0 or later
* If you use "SigmaSystemCenter", check the recommended specifications for "SigmaSystemCenter".

[SIGMABLADE Management LAN connection example]
* The management server (running NEC ESMPRO Manager) connects to the enclosure management LAN of each SIGMABLADE-H v2 through an L2 switch.
* Each enclosure has two management LAN connectors (x2). If you use one Enclosure Manager Card (EM Card), one cable is enough.
* NEC ESMPRO Manager collects memory and disk drive information from each CPU Blade via the NEC ESMPRO Agent and EXPRESSSCOPE Engine on the blade, and collects fan and power supply information from the Blade Enclosure via the EM Card.
* A dedicated LAN is used for the remote KVM function.


Installing CPU Blades

CPU Blade consuming one blade slot: install in any empty slot.
Express5800/B120a, Express5800/B120a-d, Express5800/B120b, Express5800/B120b-d, Express5800/B120b-h

(Front-view diagrams of blade slots 1 to 16 omitted.)

CPU Blade consuming two blade slots: install in paired slots as below.
Slot 1-2, slot 3-4, slot 5-6, slot 7-8, slot 9-10, slot 11-12, slot 13-14, slot 15-16
Express5800/B120a, Express5800/B120a-d, Express5800/B120b, Express5800/B120b-d, or Express5800/B120b-h equipped with an AD106a, AD106b, or AT101a (e.g., B120a in slot 1, AD106a in slot 2).
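The side-by-side pairing rule above can be sketched as a small check; this is an illustrative helper under our own naming, not an NEC utility:

```python
# Hedged sketch: verify that two blade slots form one of the documented
# side-by-side pairs (1-2, 3-4, 5-6, 7-8, 9-10, 11-12, 13-14, 15-16),
# e.g. a B120a plus its attached AD106a Storage and I/O Blade.

def is_valid_side_by_side_pair(slot_a: int, slot_b: int) -> bool:
    lo, hi = sorted((slot_a, slot_b))
    # A valid pair starts on an odd-numbered slot and also occupies
    # the next slot; pairs never cross an odd/even boundary.
    return 1 <= lo <= 15 and lo % 2 == 1 and hi == lo + 1

print(is_valid_side_by_side_pair(1, 2))  # True: a documented pair
print(is_valid_side_by_side_pair(2, 3))  # False: crosses a pair boundary
```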


Installing CPU Blades

CPU Blade consuming two blade slots (two-slot height): install in paired slots as below.
Slot 1-9, slot 2-10, slot 3-11, slot 4-12, slot 5-13, slot 6-14, slot 7-15, slot 8-16
Express5800/B140a-T

Mix of CPU Blades (example)
* When you install a two-slot-height CPU Blade, the remaining slots in the same dashed-line area of the diagram must also be filled with two-slot-height CPU Blades.
* If you install a two-slot-height CPU Blade into slots 1-9, you must install a two-slot-height CPU Blade or a Slot Cover (CPU) into slots 2-10. You cannot install a single-slot-height CPU Blade into slot 2 or slot 10.

(Diagram showing a mix of one-slot CPU Blades, two-slot-width CPU Blades, and two-slot-height CPU Blades omitted.)
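The vertical pairing rule for the B140a-T (each pair is slot n and slot n + 8) can likewise be sketched; the function name is illustrative, not from NEC documentation:

```python
# Hedged sketch: for the Express5800/B140a-T, which occupies a vertical
# slot pair (1-9, 2-10, ..., 8-16), derive the pair from the upper slot.

def b140a_t_slot_pair(upper_slot: int) -> tuple:
    if not 1 <= upper_slot <= 8:
        raise ValueError("a B140a-T must start in slot 1 through 8")
    # The lower half of the blade always sits eight slots away.
    return (upper_slot, upper_slot + 8)

print(b140a_t_slot_pair(1))  # (1, 9)
print(b140a_t_slot_pair(8))  # (8, 16)
```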


Switch Module Configuration

Switch Module Slot Positions

(Rear-view diagram of switch module slots 1 to 8, showing 1-slot-width and 2-slot-width modules, omitted.)

* Slots 1 and 2 are used for LAN connections. Select the 1Gb Intelligent Switch, 10Gb Intelligent L3 Switch (N8406-051F), 1Gb Pass-Through Card, or 10Gb Pass-Through Card for these slots. Do not install an FC Switch or FC Pass-Through Card in these slots.
* Any 1-slot-width switch module must be installed in pairs in the right and left slots of the same row. Two different switch modules are not allowed in the same row.

Connecting Mezzanine Slots to Switch Module Slots

CPU Blade (Express5800/B120a, B120a-d, B120b, B120b-d, B120b-h), Storage and I/O Blade (AD106a, AD106b), Tape Blade (AT101a):
* On-board LAN ports connect to switch module slots 1 and 2.
* Expansion slot 1 (Type-1 only) connects to switch module slots 3 and 4.
* Expansion slot 2 (Type-1 & 2) connects to switch module slots 5, 6, 7, and 8.

* The 4G FC Switch (12 ports) and 8G FC Switch (12 ports) can support up to 8 CPU Blades. See "Power Unit & Cooling Fan Configuration".
* Because two or more CPU Blades share the same switch module, identical mezzanine cards (with identical interfaces) must be installed in the same mezzanine slot positions across the CPU Blades. When you install a pair of 4G FC Switches in switch module slots 3 and 4, you cannot use a 1000BASE-T adapter in mezzanine slot 1 of the B120a, B120a-d, B120b, B120b-d, or B120b-h.
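The blade-side to switch-slot wiring described above for the B120-series blades can be captured as a small lookup table; the dictionary keys are illustrative names, not NEC identifiers:

```python
# Hedged sketch of the mezzanine-to-switch-slot mapping for B120-series
# blades, as described in the text above.

MEZZ_TO_SWITCH_SLOTS = {
    "onboard_lan": (1, 2),        # on-board LAN -> switch module slots 1-2
    "mezzanine_1": (3, 4),        # expansion slot 1 (Type-1 only) -> slots 3-4
    "mezzanine_2": (5, 6, 7, 8),  # expansion slot 2 (Type-1 & 2) -> slots 5-8
}

def reachable_switch_slots(source: str) -> tuple:
    """Which switch module slots a blade-side port group can reach."""
    return MEZZ_TO_SWITCH_SLOTS[source]

print(reachable_switch_slots("mezzanine_1"))  # (3, 4)
```

A table like this makes it easy to check, for example, that an FC mezzanine card in expansion slot 1 requires its FC switches to sit in slots 3 and 4.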


Switch Module Configuration

Connecting Mezzanine Slots to Switch Module Slots

CPU Blade (Express5800/B140a-T):
* On-board LAN ports connect to switch module slots 1 and 2.
* Expansion slot 1 (Type-1 only) and expansion slot 3 (Type-1 only) connect to switch module slots 3 and 4.
* Expansion slot 2 (Type-1 & 2) and expansion slot 4 (Type-1 & 2) connect to switch module slots 5, 6, 7, and 8.

* Because two or more CPU Blades share the same switch module, identical mezzanine cards (with identical interfaces) must be installed in the same mezzanine slot positions across the CPU Blades. When you install a pair of 8G FC Switches in switch module slots 3 and 4, you cannot use a 1000BASE-T adapter in mezzanine slots 1 and 3 of the B140a-T; only a 1000BASE-T (2ch) connection board or a Fibre Channel controller (2ch) configuration is possible.
* Optional mezzanine cards for mezzanine slots 1 and 3 must have identical interfaces.
* If you use 2ch optional mezzanine cards, mezzanine slots 2 and 4 need not have identical interfaces.


Switch Module Configuration

Connecting Mezzanine Slots to Switch Module Slots (with the 10Gb (2ch) adapter N8403-024)

CPU Blade (Express5800/B120a, B120a-d, B120b, B120b-d):
* On-board LAN ports connect to switch module slots 1 and 2.
* Expansion slot 1 (Type-1 only) connects to switch module slots 3 and 4.
* Expansion slot 2 (Type-1 & 2) with the 10GbE (2ch) adapter connects to switch module slots 5-6 and 7-8.

* Because two or more CPU Blades share the same switch module, identical mezzanine cards (with identical interfaces) must be installed in the same mezzanine slot positions across the CPU Blades.
* The 10GbE (2ch) mezzanine card (N8403-024) is a Type-2 product. You cannot install this card in a Type-1 slot.


Switch Module Configuration

Connecting Mezzanine Slots to Switch Module Slots (with the 10Gb (2ch) adapter N8403-024)

CPU Blade (Express5800/B140a-T):
* On-board LAN ports connect to switch module slots 1 and 2.
* Expansion slot 1 (Type-1 only) and expansion slot 3 (Type-1 only) connect to switch module slots 3 and 4.
* Expansion slot 2 (Type-1 & 2) with the 10GbE (2ch) adapter connects to switch module slots 5-6 and 7-8.

* Because two or more CPU Blades share the same switch module, identical mezzanine cards (with identical interfaces) must be installed in the same mezzanine slot positions across the CPU Blades.
* The 10GbE (2ch) mezzanine card (N8403-024) cannot be installed in the Type4 slot.
* The 10GbE (2ch) mezzanine card (N8403-024) is a Type-2 product. You cannot install this card in a Type-1 slot.


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 1 and 2

►Switch modules

Connecting the 1Gb Intelligent L3 Switch
* 1Gb Intelligent L3 Switch [N8406-023A] in slot 1 and slot 2, each with uplinks to the LAN.
* Option: 1000BASE-SX SFP module [N8406-024]
* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Connecting the 1:10Gb Intelligent L3 Switch
* 1:10Gb Intelligent L3 Switch [N8406-044] in slot 1 and slot 2, each with uplinks to the LAN.
* Option: 10GBASE-SR XFP module [N8406-027]

Connecting the 10Gb Intelligent L3 Switch
* 10Gb Intelligent L3 Switch (N8406-051F) in slot 1 and slot 2, each with uplinks to the LAN.
* Options: 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], 1000BASE-SX SFP module [N8406-024], 10G SFP+ Copper Cable [K410-203(03)]
* To connect to an external device, an optional SFP module matching the interface is required.
* For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (exclusive connection).


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 1 and 2

►Pass-Through Cards

Connecting the 1Gb Pass-Through Card
* 1Gb Pass-Through Card [N8406-029] in slot 1 and slot 2, each connecting to the LAN.

Connecting the 10Gb Pass-Through Card
* 10Gb Pass-Through Card [N8406-036] in slot 1 and slot 2, each connecting to an external switch.
* Mandatory option (selection required): 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], or 1000BASE-SX SFP module [N8406-024]
* To connect the 10Gb Pass-Through Card to an external network, choose the SFP module that matches the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module; to use 1Gb speed, select the 1000BASE-T SFP module or 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 3 and 4

►Switch modules

Connecting the 1Gb Intelligent L3 Switch
* 1Gb Intelligent L3 Switch [N8406-023A] in slot 3 and slot 4, each with uplinks to the LAN.
* Option: 1000BASE-SX SFP module [N8406-024]
* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Connecting the 1:10Gb Intelligent L3 Switch
* 1:10Gb Intelligent L3 Switch [N8406-044] in slot 3 and slot 4, each with uplinks to the LAN.
* Option: 10GBASE-SR XFP module [N8406-027]

Connecting the 10Gb Intelligent L3 Switch
* 10Gb Intelligent L3 Switch (N8406-051F) in slot 3 and slot 4, each with uplinks to the LAN.
* Options: 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], 1000BASE-SX SFP module [N8406-024], 10G SFP+ Copper Cable [K410-203(03)]
* To connect to an external device, an optional SFP module matching the interface is required.
* For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (exclusive connection).


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 3 and 4

►Pass-Through Cards

Connecting the 1Gb Pass-Through Card
* 1Gb Pass-Through Card [N8406-029] in slot 3 and slot 4, each connecting to the LAN.

Connecting the 10Gb Pass-Through Card
* 10Gb Pass-Through Card [N8406-036] in slot 3 and slot 4, each connecting to an external switch.
* Mandatory option (selection required): 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], or 1000BASE-SX SFP module [N8406-024]
* To connect the 10Gb Pass-Through Card to an external network, choose the SFP module that matches the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module; to use 1Gb speed, select the 1000BASE-T SFP module or 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.


SIGMABLADE-H v2

Switch Module Configuration
Connecting Fibre Channel Modules: Connection to Switch Module slots 3 and 4

►Connecting the 8G FC Switch
* 8G FC Switch (12 ports) [N8406-040] (two SFP+ modules attached as standard) or 8G FC Switch (24 ports) [N8406-042] (four SFP+ modules attached as standard) in slot 3 and slot 4.
* Optional FC SFP+ module [N8406-041] for the 8G FC Switch.
* Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]
* Connects to NEC Storage D1/D3/D4/D8/E1/M series.
* For NEC Storage products, refer to the configuration guide of NEC Storage or the NEC Storage web site (http://www.nec.com/global/prod/storage/).
* Connection to the External FC Switch [N8190-119] is not supported.


SIGMABLADE-H v2

Switch Module Configuration
Connecting Fibre Channel Modules: Connection to Switch Module slots 3 and 4

►Connecting the FC Pass-Through Card
* 4G FC Pass-Through Card [N8406-030] in slot 3 and slot 4; SFP modules are required.
* Optional FC SFP module [N8406-015] for the FC Pass-Through Card.
* Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]
* Connects to NEC Storage S500/S550/S1500/S2500/S2900/D1/D3/D8/E1/M series, directly or through an external FC switch: NF9330-SS07 (8 port), NF9330-SS08 (16 port), NF9330-SS011 (16 port), NF9330-SS012 (32 port), NF9330-SS013 (8 port), NF9330-SS014 (16 port), NF9330-SS015 (24 port), NF9330-SS016 (40 port), NF9330-SS22 (8 port), NF9330-SS23 (8 port), NF9320-SS21 (8 port), NF9320-SS06 (16 port)
* To connect an FC cable, you need the optional FC SFP module. No FC SFP modules are attached to the 4G FC Pass-Through Card (N8406-030) as standard.
* Connection to the External FC Switch [N8190-119] is not supported.
* Connection to the NF9340-SS017/SS018/SS019/SS024/SS025/SS026 external 8G FC switches is not supported.
* For NEC Storage products, refer to the configuration guide of NEC Storage or the NEC Storage web site (http://www.nec.com/global/prod/storage/).


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 5 and 6

►Switch modules

Connecting the 1Gb Intelligent L3 Switch
* 1Gb Intelligent L3 Switch [N8406-023A] in slot 5 and slot 6, each with uplinks to the LAN.
* Option: 1000BASE-SX SFP module [N8406-024]
* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Connecting the 1:10Gb Intelligent L3 Switch
* 1:10Gb Intelligent L3 Switch [N8406-044] in slot 5 and slot 6, each with uplinks to the LAN.
* Option: 10GBASE-SR XFP module [N8406-027]

Connecting the 10Gb Intelligent L3 Switch
* 10Gb Intelligent L3 Switch (N8406-051F) in slot 5 and slot 6, each with uplinks to the LAN.
* Options: 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], 1000BASE-SX SFP module [N8406-024], 10G SFP+ Copper Cable [K410-203(03)]
* To connect to an external device, an optional SFP module matching the interface is required.
* For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (exclusive connection).


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 5 and 6

►Switch module

Connecting the 10Gb Intelligent L3 Switch
* 10Gb Intelligent L3 Switch [N8406-026] in slots 5-6, with uplinks to the LAN.
* Option: 10GBASE-SR XFP module [N8406-027]
* Up to two 10Gb Intelligent L3 Switches [N8406-026] can be installed in a SIGMABLADE-H v2.
* The 10Gb Intelligent L3 Switch [N8406-026] can be installed in switch module slots 5, 6, 7, and 8 (one 10GbE switch consumes two slots). If you install it in slots 5-6 or 7-8, you cannot install different products in slots 7-8 or 5-6, respectively.
* The optional 10GBASE-SR XFP module (N8406-027) is necessary for connection to an external network (up to 4 ports).
* The 10GbE adapter N8403-024 must be installed in the CPU Blade.

►Pass-Through Cards

Connecting the 1Gb Pass-Through Card
* 1Gb Pass-Through Card [N8406-029] in slot 5 and slot 6, each connecting to the LAN.

Connecting the 10Gb Pass-Through Card
* 10Gb Pass-Through Card [N8406-036] in slot 5 and slot 6, each connecting to an external switch.
* Mandatory option (selection required): 10GBASE-SR SFP+ module [N8406-037], 1000BASE-T SFP module [N8406-039], or 1000BASE-SX SFP module [N8406-024]
* To connect the 10Gb Pass-Through Card to an external network, choose the SFP module that matches the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module; to use 1Gb speed, select the 1000BASE-T SFP module or 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.
* The standard LAN ports of the blade server support 1Gbps only.


SIGMABLADE-H v2

Switch Module Configuration
Connecting Fibre Channel Modules: Connection to Switch Module slots 5 and 6

►Connecting the 8G FC Switch
* 8G FC Switch (12 ports) [N8406-040] (two SFP+ modules attached as standard) or 8G FC Switch (24 ports) [N8406-042] (four SFP+ modules attached as standard) in slot 5 and slot 6.
* Optional FC SFP+ module [N8406-041] for the 8G FC Switch.
* Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]
* Connects to NEC Storage D1/D3/D4/D8/E1/M series.
* For NEC Storage products, refer to the configuration guide of NEC Storage or the NEC Storage web site (http://www.nec.com/global/prod/storage/).
* Connection to the External FC Switch [N8190-119] is not supported.


SIGMABLADE-H v2

Switch Module Configuration
Connecting Fibre Channel Modules: Connection to Switch Module slots 5 and 6

►Connecting the FC Pass-Through Card
* 4G FC Pass-Through Card [N8406-030] in slot 5 and slot 6; SFP modules are required.
* Optional FC SFP module [N8406-015] for the FC Pass-Through Card.
* Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]
* Connects to NEC Storage S500/S550/S1500/S2500/S2900/D1/D3/D8/E1/M series, directly or through an external FC switch: NF9330-SS07 (8 port), NF9330-SS08 (16 port), NF9330-SS011 (16 port), NF9330-SS012 (32 port), NF9330-SS013 (8 port), NF9330-SS014 (16 port), NF9330-SS015 (24 port), NF9330-SS016 (40 port), NF9330-SS22 (8 port), NF9330-SS23 (8 port), NF9320-SS21 (8 port), NF9320-SS06 (16 port)
* To connect an FC cable, you need the optional FC SFP module. No FC SFP modules are attached to the 4G FC Pass-Through Card (N8406-030) as standard.
* Connection to the External FC Switch [N8190-119] is not supported.
* Connection to the NF9340-SS017/SS018/SS019/SS024/SS025/SS026 external 8G FC switches is not supported.
* For NEC Storage products, refer to the configuration guide of NEC Storage or the NEC Storage web site (http://www.nec.com/global/prod/storage/).


SIGMABLADE-H v2 Connection to LAN

Switch Module Configuration
Connection to Switch Module slots 7 and 8

►Switch modules

Connecting the 1Gb Intelligent L3 Switch
* 1Gb Intelligent L3 Switch [N8406-023A] in slot 7 and slot 8, each with uplinks to the LAN.
* Option: 1000BASE-SX SFP module [N8406-024]
* If you need 1000BASE-SX, an optional 1000BASE-SX SFP module is required.
* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive (cf. chart).
* Up to four 1000BASE-SX ports are available.
* The above three rules apply to all switch module slots.

Connecting the 1:10Gb Intelligent L3 Switch
* 1:10Gb Intelligent L3 Switch [N8406-044] in slot 7 and slot 8, each with uplinks to the LAN.
* Option: 10GBASE-SR XFP module [N8406-027]


SIGMABLADE-H v2 Connection to LAN

Connection to Switch Module slot 7,8

Blade Enclosure (SIGMABLADE-H v2)

Switch Module Configuration

► Pass Through Card

Slot 7

Slot 8

Option for 10Gb Intelligent L3 Switch* 10GBASE-SR XFP module [N8406-027]

* 10Gb Intelligent L3 Switch [N8406-026] To LAN

* Up to two 10Gb Intelligent L3 Switches [N8406-026] can be installed into SIGMABLADE-H v2.
* The 10Gb Intelligent L3 Switch [N8406-026] can be installed into Switch Module slots 5, 6, 7, and 8 only (one 10GbE switch consumes two slots). If you install it in slots 5,6 or 7,8, you cannot install different products in slots 7,8 or 5,6 respectively.
* An optional 10GBASE-SR XFP module (N8406-027) is necessary for connection with an external network (up to 4 ports).
* A 10GbE adapter (N8403-024) is required in the CPU Blade.

►Connecting 1Gb Pass-Through Card

Slot 7

Slot 8

1Gb Pass-Through Card [N8406-029], connected to LAN

1Gb Pass-Through Card [N8406-029], connected to LAN

►Connecting 10Gb Pass-Through Card

Slot 7

Slot 8

10Gb Pass-Through Card [N8406-036], connected to an external switch

10Gb Pass-Through Card [N8406-036], connected to an external switch

Mandatory selectable options for the 10Gb Pass-Through Card:
* 10GBASE-SR SFP+ module [N8406-037]
* 1000BASE-T SFP module [N8406-039]
* 1000BASE-SX SFP module [N8406-024]

* To connect the 10Gb Pass-Through Card to an external network, choose the 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) to match the LAN port type of the blade.
* For 10Gb operation, select the 10GBASE-SR SFP+ module.
* For 1Gb operation, select the 1000BASE-T SFP module or the 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.
* The standard LAN ports of blade servers support 1Gbps only.
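The module-selection rules above amount to a small lookup from desired speed and media type to a part number. As a sketch, this can be written as follows; the part numbers come from this guide, while the helper function and its speed/media keys are illustrative assumptions, not NEC tooling.

```python
# Illustrative lookup for the 10Gb Pass-Through Card SFP selection rules.
# Part numbers are from the guide; the function itself is hypothetical.

MODULES = {
    ("10Gb", "fiber"): "N8406-037",   # 10GBASE-SR SFP+ module
    ("1Gb", "copper"): "N8406-039",   # 1000BASE-T SFP module
    ("1Gb", "fiber"): "N8406-024",    # 1000BASE-SX SFP module
}

def select_sfp(speed, media):
    """Return the SFP/SFP+ module matching the blade's LAN port type."""
    try:
        return MODULES[(speed, media)]
    except KeyError:
        raise ValueError(f"No module for {speed} over {media}") from None

print(select_sfp("10Gb", "fiber"))  # N8406-037
print(select_sfp("1Gb", "copper"))  # N8406-039
```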


Connection to Switch Module slot 7,8

SIGMABLADE-H v2

Slot 7

Slot 8

Connecting Fibre Channel Modules

►Connecting 8G FC Switch

* 8G FC switch (24 ports) [N8406-042] in each slot; four SFP+ modules are attached as standard.

Optional FC SFP+ module [N8406-041] for the 8G FC Switch

Blade Enclosure (SIGMABLADE-H v2)

Switch Module Configuration

Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]

NEC Storage D1/D3/D4/D8/E1/M

* For NEC Storage products, refer to the configuration guide of NEC Storage or NEC Storage WEB site. (http://www.nec.com/global/prod/storage/)

* Only an FC card in mezzanine slot 4 of the B140a-T can be connected to the 24-port FC Switch in slots 7 and 8. However, if a 1000BASE-T (4ch) adapter is used in another CPU Blade in the Blade Enclosure, you cannot install an FC card in mezzanine slot 4 of the B140a-T.



Connection to Switch Module slot 7,8

SIGMABLADE-H v2

Slot 7

Slot 8

Connecting Fibre Channel Modules

►Connecting FC Through Card

* 4G FC Pass-Through Card [N8406-030] in each slot; SFP modules are required.

Optional FC SFP module [N8406-015] for the FC Pass-Through Card

NEC StorageS500/S550/S1500/S2500/

S2900/D1/D3/D8/E1/M

Blade Enclosure (SIGMABLADE-H v2)

Switch Module Configuration

External FC switch:
NF9330-SS07 (8 port), NF9330-SS08 (16 port), NF9330-SS011 (16 port), NF9330-SS012 (32 port), NF9330-SS013 (8 port), NF9330-SS014 (16 port), NF9330-SS015 (24 port)

Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]

* Only an FC card in mezzanine slot 4 of the B140a-T can be connected to the 24-port FC Switch in slots 7 and 8. However, if a 1000BASE-T (4ch) adapter is used in another CPU Blade in the Blade Enclosure, you cannot install an FC card in mezzanine slot 4 of the B140a-T.
* To connect an FC cable, an optional FC SFP Module is required.
* Connection to the External FC Switch [N8190-119] is not supported.
* Connection to the NF9340-SS017/SS018/SS019/SS024/SS025/SS026 external 8Gb FC switches is not supported.

NF9330-SS016 (40 port), NF9330-SS22 (8 port), NF9330-SS23 (8 port), NF9320-SS21 (8 port), NF9320-SS06 (16 port)

* For NEC Storage products, refer to the configuration guide of NEC Storage or NEC Storage WEB site. (http://www.nec.com/global/prod/storage/)


Power Unit & Cooling Fan Configuration

Power Unit (1/2)


Calculate the total DC power consumption:

Total DC power consumption (W) = sum of the DC power consumption of all CPU Blades (W) + 1,260 (W) (Blade Enclosure power consumption *)

* The Blade Enclosure power consumption includes the power consumption of the installed switch modules and cooling fans.

For the DC power consumption per CPU Blade, see the "CPU Blade" section of this document.

See the tables below for the number of power units required for your total DC power consumption.

The numbers of power units and cooling fans required depend on the number and type of CPU Blades installed.

Blade Enclosure (SIGMABLADE-H v2)

■ AC200V

Total DC power consumption (W)    Units required (N+1 redundancy)    Units required (N+N redundancy)
Less than 2,244                   2                                  2
Less than 4,260                   3                                  4
Less than 6,288                   4                                  6
Less than 8,304                   5                                  Not supported
Less than 10,320                  6                                  Not supported

Supplement: Order the necessary number of Power Supply Units (not provided as standard).

■ AC100V

Total DC power consumption (W)    Units required (N+1 redundancy)    Units required (N+N redundancy)
Less than 1,824                   3                                  4
Less than 2,688                   4                                  6
Less than 3,552                   5                                  Not supported
Less than 4,416                   6                                  Not supported

Supplement: Order the necessary number of Power Supply Units (not provided as standard).
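The sizing tables above can be expressed as a simple lookup, for example to sanity-check an ordering plan. The thresholds and unit counts are copied from the tables; the function itself is an illustrative sketch, not part of any NEC tool.

```python
# Power-unit sizing lookup for SIGMABLADE-H v2, per the AC200V/AC100V
# tables in this guide. None marks entries listed as "Not supported".

TABLES = {
    "AC200V": [(2244, 2, 2), (4260, 3, 4), (6288, 4, 6),
               (8304, 5, None), (10320, 6, None)],
    "AC100V": [(1824, 3, 4), (2688, 4, 6),
               (3552, 5, None), (4416, 6, None)],
}

def required_psus(total_dc_watts, voltage, redundancy):
    """Return the number of power supply units required."""
    col = 1 if redundancy == "N+1" else 2
    for row in TABLES[voltage]:
        if total_dc_watts < row[0]:  # rows are "Less than" thresholds
            if row[col] is None:
                raise ValueError("Not supported in this redundancy mode")
            return row[col]
    raise ValueError("Total DC power consumption exceeds the table range")

print(required_psus(5000, "AC200V", "N+1"))  # 4
```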


Power Unit & Cooling Fan Configuration

Power Unit (2/2)

* To calculate AC power consumption and apparent power, use the following formulas:

AC power consumption (W) = Total DC power consumption (W) / 0.87 (200V configuration)
AC power consumption (W) = Total DC power consumption (W) / 0.83 (100V configuration)

Apparent power (VA) = AC power consumption (W) / 0.98
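The formulas above can be turned into a small calculator. The efficiency factors (0.87 / 0.83) and the 0.98 power factor come from this guide; the function names are illustrative.

```python
# AC power and apparent power from total DC power consumption,
# using the conversion factors stated in this guide.

def ac_power(total_dc_watts, voltage="AC200V"):
    """AC power consumption (W) for a 200V or 100V configuration."""
    efficiency = 0.87 if voltage == "AC200V" else 0.83
    return total_dc_watts / efficiency

def apparent_power(total_dc_watts, voltage="AC200V"):
    """Apparent power (VA) = AC power consumption / 0.98."""
    return ac_power(total_dc_watts, voltage) / 0.98

print(round(ac_power(4350, "AC200V")))       # 5000
print(round(apparent_power(4350, "AC200V"))) # 5102
```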

Blade Enclosure (SIGMABLADE-H v2)


Power Unit & Cooling Fan Configuration

Fan Unit (1/2)

You must install the CPU Blades from left to right in the enclosure (Slot 1, 9, 2, 10...). Blade Enclosures with the Express5800/B120b-h, Express5800/120Bb-m6, 4G FC Switch (12 Port) [N8406-019], or 8G FC Switch (12 Port) [N8406-040] are an exception; see the next page.

In the usual sequential installation order:
* When you use CPU Blade slots 1 and 9, four cooling fans are required.
* When you use CPU Blade slots 1 to 4 and 9 to 12, six cooling fans are required.
* When you use CPU Blade slots 1 to 16, eight cooling fans are required.
* Up to 10 Fan Units can be installed.

Cooling fans are configured as N+1 for redundancy. The Blade Enclosure (SIGMABLADE-H v2) can continue working normally after a single cooling-fan failure; if two cooling fans break down, CPU Blades and switch modules may stop because of abnormal temperature. If you install 10 Fan Units, the rotation speed of the fans is reduced, which lowers the noise level.
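The fan-count rules for the usual installation order can be sketched as a small function. The slot groupings follow the bullets above; the B120b-h, 120Bb-m6, and 12-port FC switch exceptions described on the next page are deliberately not modeled here.

```python
# Cooling fans required in a SIGMABLADE-H v2 for the usual installation
# order (Slot 1, 9, 2, 10, ...). `used_slots` is the set of occupied
# CPU Blade slots. Exceptions (B120b-h etc.) are out of scope.

def fans_required(used_slots):
    if not set(used_slots) <= set(range(1, 17)):
        raise ValueError("slots must be in the range 1-16")
    if set(used_slots) <= {1, 9}:
        return 4
    if set(used_slots) <= {1, 2, 3, 4, 9, 10, 11, 12}:
        return 6
    return 8

print(fans_required({1, 9}))             # 4
print(fans_required({1, 2, 9, 10}))      # 6
print(fans_required(set(range(1, 17))))  # 8
```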

The numbers of cooling fans required depend on the number and type of CPU Blades installed.

Blade Enclosure (SIGMABLADE-H v2)

Blade Enclosure (SIGMABLADE-H v2) Front View

[Diagram: CPU Blade Slots 1 to 16, numbered left to right in two rows (Slots 1-8 upper, Slots 9-16 lower).]

* See the User's Guide to check the correct positions of the cooling fans.

* For the number of slots occupied by each CPU Blade product, refer to the "CPU Blade" section of this configuration guide.

* The number of cooling fans does not depend on the number of switch modules and power units.

* Fan Units are not provided as standard. Purchase the necessary number of Fan Units.


Power Unit & Cooling Fan Configuration

Fan Unit (2/2)

[Diagram: CPU Blade slot layout, Slots 1 to 16.]

Blade Enclosure (SIGMABLADE-H v2)

Note for Express5800/B120b-h CPU Blade

When you install B120b-h blades, more Fan Units are required than for other blades.
a. If B120b-h and other blades are installed in slot 1 or 9, 6 Fan Units must be installed.
b. If B120b-h and other blades are installed in slots 1 to 4 or 9 to 12, 8 Fan Units must be installed.
c. If B120b-h and other blades are installed in slots 5 to 8 or 13 to 16, 10 Fan Units must be installed.

Note for Express5800/120Bb-m6 CPU Blade

Express5800/120Bb-m6 is a two-slot-wide CPU Blade. It can be installed into odd-even slot pairs, not into even-odd slot pairs. When you install a 120Bb-m6 into Slots 1 and 2, you need six cooling fans.

Note for 4G FC Switch(12Port ) [N8406-019] and 8G FC Switch(12Port) [N8406-040]

The FC Switch (12 Port) supports CPU Blades in Slots 1 to 8 only; CPU Blades in Slots 9 to 16 cannot connect to it. The installation order for CPU Blades and the required number of fans therefore differ from the rules on the previous page.

* When you install a CPU Blade into Slot 2, 3, or 4, six cooling fans are necessary.
* When you install a CPU Blade into Slot 5, 6, 7, or 8, eight cooling fans are necessary.



Blade Enclosure (SIGMABLADE-H v2)

Power Supply Configuration

(1) Power supply connection diagram (N+1 redundancy, with power strips)

Arrange six AC cables and three power strips when configuring the maximum number of Power Units, and connect them as follows:

[Diagram: SIGMABLADE-H v2 with six Power Units (IEC320-C20 inlets), connected through locally procured AC cables and power strips to a distribution board or an uninterruptible power supply (UPS). ATTENTION: Do not connect three or more AC cables to each power strip, even if the power strip has many outlets.]

[Diagram: SIGMABLADE-H v2 with six Power Units (IEC320-C20 inlets), connected directly through locally procured AC cables to a distribution board or an uninterruptible power supply (UPS).]

(2) Power supply connection diagram (N+1 redundancy, without power strips)
Arrange six AC cables when configuring the maximum number of Power Units.
- The same number of sockets, with breakers of appropriate capacity, as AC cables is needed.
- Do not share the same electrical power system with other devices.

ATTENTION : DO NOT CONNECT THREE OR MORE AC CABLES TO EACH POWER STRIP EVEN IF THE POWER STRIP HAS MANY OUTLETS.


Blade Enclosure (SIGMABLADE-H v2)

Power Supply Configuration

(3) Power supply connection diagram (N+N redundancy, with power strips)

Arrange six AC cables and four power strips when configuring the maximum number of Power Units, and connect them as follows:

[Diagram: SIGMABLADE-H v2 with six Power Units (IEC320-C20 inlets), split between two electrical power systems (each a distribution board or an uninterruptible power supply) through locally procured AC cables and power strips.]

(4) Power supply connection diagram (N+N redundancy, without power strips)
Arrange six AC cables when configuring the maximum number of Power Units.
- The same number of sockets, with breakers of appropriate capacity, as AC cables is needed.
- Do not share the same electrical power system with other devices.
- To avoid a single point of failure (SPOF), use two electrical power systems.

[Diagram: SIGMABLADE-H v2 with six Power Units (IEC320-C20 inlets), connected directly through locally procured AC cables to two electrical power systems (each a distribution board or an uninterruptible power supply).]

ATTENTION: DO NOT CONNECT THREE OR MORE AC CABLES TO EACH POWER STRIP EVEN IF THE POWER STRIP HAS MANY OUTLETS.


For efficient cooling, any vacant slot must be covered with a Slot Cover. See the information below and purchase the appropriate Slot Covers.

* Slot Cover for CPU Blade Slot

Any vacant CPU Blade slot must be covered with a Slot Cover (CPU Blade). The number of Slot Covers required is calculated by the formula below.

16 - (CPU Blade slots occupied by CPU Blades *)

* See the "CPU Blade" section of this document to calculate occupied slots.
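The formula above is simple arithmetic over the 16 slots of the enclosure; as a minimal sketch (the function name is illustrative):

```python
# Slot covers needed = total CPU Blade slots minus occupied slots,
# per the formula in this guide (16 slots in SIGMABLADE-H v2).

def slot_covers_needed(occupied_slots, total_slots=16):
    if not 0 <= occupied_slots <= total_slots:
        raise ValueError("occupied_slots out of range")
    return total_slots - occupied_slots

print(slot_covers_needed(10))  # 6
```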

Slot Cover configuration

N8405-046 Slot Cover (CPU Blade)

Slot Covers are installed in the vacant slots of the enclosure.

Blade Enclosure (SIGMABLADE-H v2)


SIGMABLADE-M

Blade Enclosure (SIGMABLADE-M)

Contents:
- SIGMABLADE-M specifications
- SIGMABLADE-M Quick Sheet
- SIGMABLADE-M basic configuration
- EM Card
- Switch Module / Pass-Through Card (LAN)
- Switch Module / Pass-Through Card (FC)
- Blade Enclosure Configuration
- Switch Module Configuration
- Power Unit & Cooling Fan Configuration


SIGMABLADE-M

Blade Enclosure (SIGMABLADE-M)

Features
- Height: 6U
- Supports up to 8 CPU Blades and 6 switch modules
- SIGMABLADE monitor provided as standard
- Redundant EM Cards, power units, and cooling fans
- Blades, Switch Modules, Power Units, EM Cards, and cooling fans are hot-swappable
- DVD-ROM and local KVM switch provided as standard
- Power redundancy supported

Packs up to 8 CPU Blades in a 6U-height chassis

N-Code: N8405-016BF
CPU Blade: maximum 8

[Photo: SIGMABLADE-M enclosure, showing the CPU Blade, Switch Module, Fan Unit, EM Card, and Power Unit locations.]

* A set of console devices is required for each Blade Enclosure.
* Recording to the optical disk drive is not supported.
* For the system configuration, refer to the system guide of each model.
* Power Units, EM Cards, and Fan Units are mandatory options.
* Mandatory option: for LAN connection, select from the 1Gb Intelligent L3 Switch, 1Gb Pass-Through Card, or 10Gb Pass-Through Card.
* To install additional power units and cooling fans, see 'Power Unit & Cooling Fan Configuration'.
* For efficient cooling, any vacant CPU Blade slot must be covered with a Slot Cover (N8405-032). Slot covers are not installed as standard; purchase them separately to cover all vacant slots.
* Update the EM firmware to use the latest options.

Note:

* Represents the maximum wattage consumed in the maximum configuration. For more information, refer to the "Power Unit & Cooling Fan Configuration" section.

Specifications:
- Voltage: AC 200V (AC 200V-240V ±10%)
- Frequency: 50/60Hz ±1Hz
- Power supply module: not provided as standard (up to 4)
- EM Card: not provided as standard (up to 2)
- Fan Unit: not provided as standard (up to 5)
- Number of plugs: up to 4
- I/O interface: Serial Port (COM) D-Sub 9-pin connector x1; USB connector x1; PS/2 connector x2; Display D-Sub 15-pin connector x1
- Auxiliary Storage Unit: CD-RW/DVD-ROM x1 (reading speed: DVD x3-x8, CD x10-x24)
- Size (W x D x H mm): 484.8 x 829.0 x 264.2 (6U, protruding objects included)
- Maximum Power Consumption (*): 5,136W (AC) / 5,241VA
- Maximum Weight: 125kg
- Temperature / Humidity: during operation 10 to 35C / 20 to 80% (non-condensing); stored -10 to 55C / 20 to 80% (non-condensing)


SIGMABLADE-M Quick Sheet

Blade Enclosure (SIGMABLADE-M)

[Layout diagram: SIGMABLADE-M [N8405-016BF] with CPU Blade slots 1-8, FAN slots 1-5, Switch Module slots 1-6, EM slots 1-2, four Power Units, and a built-in DVD-ROM drive and USB port.]

・ EM Card [N8405-019]: mandatory option (at least one card required)

・ Fan Unit [N8405-053]: mandatory option (at least 3 units required); can be mixed with N8405-018

・ Slot Cover [N8405-032]: mandatory for vacant slots

・ Front Bezel [N8405-033]

Switch Module slots 1-6:

* 1Gb Intelligent L3 Switch [N8406-023A]
* 1000BASE-SX SFP module [N8406-024]
* GbE Expansion Card [N8406-013]
* 10Gb Intelligent L3 Switch [N8406-026]
* 1:10Gb Intelligent L3 Switch [N8406-044]
* 10GBASE-SR XFP module [N8406-027]
* 1Gb Pass-Through Card [N8406-011]
* 10Gb Intelligent L3 Switch [N8406-051F]
* 10Gb Pass-Through Card [N8406-035]
* 10GBASE-SR SFP+ module [N8406-037]
* 1000BASE-T SFP module [N8406-039]
* FC SFP Module [NF9330-SF02]
* 2/4G FC Pass-Through Card [N8406-021]
* FC SFP Module [N8406-015]
* 8G FC Switch (12 ports) [N8406-040]
* 4/8G FC SFP+ module [N8406-041]

* For LAN connection, select the 1Gb Intelligent Switch, 10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, or 10Gb Pass-Through Card.
* The 1Gb Intelligent L2/L3 Switch, 1Gb Pass-Through Card, and 10Gb Intelligent L3 Switch (N8406-051F) can be installed in any switch module slot.
* The FC Switch and FC Pass-Through Card can be installed in slots 3 to 6 only.
* The 1Gb Interlink Expansion Card can use only slots 3 and 4, and only when 1Gb Intelligent L2/L3 Switches occupy slots 1 and 2.
* The 10Gb Intelligent L3 Switch (N8406-026) can be installed in slots 5 and 6 only.
* Connection of the FC Pass-Through Card and the 8Gb FC Switch is not supported.
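The slot-placement constraints above can be checked with a simple table-driven sketch. The module categories and allowed slot sets follow the rules in this guide; the function and its naming are illustrative, not NEC tooling, and do not model cross-module dependencies (such as the Interlink Card requiring switches in slots 1 and 2).

```python
# Illustrative check of SIGMABLADE-M switch-module slot placement rules.

ALLOWED_SLOTS = {
    "1Gb Intelligent L3 Switch": {1, 2, 3, 4, 5, 6},
    "1Gb Pass-Through Card": {1, 2, 3, 4, 5, 6},
    "10Gb Intelligent L3 Switch (N8406-051F)": {1, 2, 3, 4, 5, 6},
    "FC Switch": {3, 4, 5, 6},
    "FC Pass-Through Card": {3, 4, 5, 6},
    "1Gb Interlink Expansion Card": {3, 4},
    "10Gb Intelligent L3 Switch (N8406-026)": {5, 6},
}

def placement_ok(module, slot):
    """True if the module may be installed in the given slot (1-6)."""
    return slot in ALLOWED_SLOTS[module]

print(placement_ok("FC Switch", 2))                    # False
print(placement_ok("1Gb Interlink Expansion Card", 3)) # True
```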

COM port: optional cross cable [K410-84(05)] (5m)

Display, PS/2 keyboard, and mouse: not attached to the Blade Enclosure; purchase them separately if necessary.

* Power Unit (2,250W / 80 PLUS Gold) [N8405-055F]: mandatory option; cannot be mixed with other types of Power Unit; for N+N redundancy configuration only.


SIGMABLADE-M basic configuration

Blade Enclosure (SIGMABLADE-M)

* N8405-016BF Blade Enclosure (SIGMABLADE-M)
Local KVM switch function provided as standard. SIGMABLADE monitor provided as standard.

* Power Unit, Fan Unit, and Enclosure Manager Card (EM Card) are mandatory. For LAN connection, select either the 1Gb Intelligent Switch or the 1Gb Pass-Through Card.
* For efficient cooling, any vacant CPU Blade slot must be covered with a Slot Cover (N8405-032). Slot covers are not installed as standard; purchase them separately to cover all vacant slots.

* N8405-019A EM Card (mandatory option)

EM Card

* N8405-055F Power Unit (2,250W, 80 PLUS® Gold)
Mandatory option. Refer to "Power Unit & Cooling Fan Configuration" for details. Cannot be mixed with other types of Power Unit.

Power Unit

* N8405-053 Fan Unit
Mandatory option. Refer to "Power Unit & Cooling Fan Configuration" for details. Can be mixed with N8405-018.

* N8405-032 Slot Cover
For vacant CPU Blade slots of the Blade Enclosure "SIGMABLADE-M" (N8405-016BF).

* For efficient cooling, any vacant slot must be covered with a Slot Cover.

* N8405-033 Front Bezel

Fan Unit

Slot Cover, Bezel


SIGMABLADE-M basic configuration

Blade Enclosure (SIGMABLADE-M)

* N8405-016BF Blade Enclosure (SIGMABLADE-M)
Local KVM switch function provided as standard. SIGMABLADE monitor provided as standard.

* Power Unit, Fan Unit, and Enclosure Manager Card (EM Card) are mandatory. For LAN connection, select either an Intelligent Switch or a Pass-Through Card.
* For efficient cooling, any vacant CPU Blade slot must be covered with a Slot Cover (N8405-032). Slot covers are not installed as standard; purchase them separately to cover all vacant slots.

* N8406-011 1Gb Pass-Through Card
External LAN port x16. Up to 3 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF).

* Mandatory Option: select either Pass-Through Card or Intelligent Switch

* N8406-021 2/4G FC Pass-Through Card
SFP port x16. Up to 2 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). To connect an FC cable, an optional FC SFP Module [N8406-015] is required for each port.

* N8406-013 1Gb Interlink Expansion Card
Internal LAN port x16. Up to 1 module can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). Can be installed in slots 3 and 4 only.

Switch Module
* N8406-023A 1Gb Intelligent L3 Switch
LAN port: 10BASE-T / 100BASE-TX / 1000BASE-T x5. Up to 6 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). An optional 1000BASE-SX SFP module (N8406-024) is necessary for connection with a 1000BASE-SX cable.
* Mandatory option: select either a Pass-Through Card or an Intelligent Switch.

* N8406-026 10Gb Intelligent L3 Switch
XFP port x4. Up to 1 module can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). An optional 10GBASE-SR XFP module (N8406-027) is necessary for connection with an external network. A 10Gb (2ch) adapter (N8403-024) must be installed in the CPU Blade.

* N8406-040 8G FC Switch (12 ports)
External port x4. Up to 4 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). Two FC SFP+ modules are attached as standard. To add SFP+ modules, purchase the 4/8G FC SFP+ module (N8406-041).

* N8406-044 1:10Gb Intelligent L3 Switch
External user LAN: 10BASE-T/100BASE-TX/1000BASE-T port x4, XFP port x2. Up to 6 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). An optional 10GBASE-SR XFP module (N8406-027) is necessary for connection with an external 10Gb network.

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

* N8406-035 10Gb Pass-Through Card
External port x8. Up to 6 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). An optional 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is necessary for connection with an external network. Cannot be connected with the 10Gb (2ch) adapter (N8403-024).

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch

・ N8406-051F 10Gb Intelligent L3 Switch
External port x8. Up to 6 modules can be installed into the Blade Enclosure "SIGMABLADE-M" (N8405-016BF). An optional 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is necessary for connection with an external network. For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ copper cable (K410-203(03)) or connect via the backplane (adjacent slots only; exclusive connection). Do not connect with the 10GbE Adapter (N8403-024).

* Mandatory Option: select either 1Gb Pass-Through, 10Gb Pass-Through or 1Gb/1:10GbE Intelligent Switch


SIGMABLADE-M

Front View

Blade Enclosure (SIGMABLADE-M)

Blade Slot (from left to right, 1-8)

Power Unit (from left to right, 1-4)

SIGMABLADE Monitor

DVD-ROM drive

USB port


SIGMABLADE-M

Blade Enclosure (SIGMABLADE-M)

Rear View

FAN slot (from left to right, 1-5)

Switch Module Slot Bottom left: slot 1, Bottom right: slot 2Mid. left: slot 3, Mid. right: slot 4Top left: slot 5, Top right: slot 6

Enclosure Management LAN

Serial port

Display connector

Keyboard connector

Mouse connector

AC Inlet


SIGMABLADE-M

Side Views

Blade Enclosure (SIGMABLADE-M)

[Dimensional drawings of the SIGMABLADE-M chassis (W 484.8 x H 264.2 mm, 6U); see the specifications table for full dimensions.]


EM Card

Blade Enclosure (SIGMABLADE-M)

Features
- Enclosure Management (EM) Card for SIGMABLADE-M
- Monitors the status of the Blade Enclosure (SIGMABLADE-M)
- Redundancy enabled by installing two EM Cards

Product name: EM card
N-Code: N8405-019A
Power consumption: up to 5W (DC)
Weight: 0.2kg

Features

* Power management
* Cooling management
* Enclosure management
* System status monitoring
* Redundancy
* Switch Module set-up function
* UPS management functions
* Enclosure web console management
* Power consumption management
* Remote KVM
* vIO control

►At least one card is required for each Blade Enclosure (SIGMABLADE-M).
►If you do not install an EM Card, you cannot power on the Blade Enclosure.

Important

Front View

[Front view: RESET switch, STATUS LED, Active LED, ID LED, ID switch, LINK/ACT LED, SPEED LED, management LAN port (maintenance only), serial port (maintenance only), fixing screw.]


EM Card

Blade Enclosure (SIGMABLADE-M)

The EM Card provides the following management functionality:

■ Enclosure web console management

■ UPS Management

The EM Card web console displays a list of the SIGMABLADE-M enclosures that are installed in the same rack cabinet. Each enclosure in the list has its own web console available to view.
Important:
- A single EM Card supports up to six SIGMABLADE-M enclosures loaded in the same rack cabinet.
- To combine the N8405-019 and N8405-019A EM Cards in the same enclosure, use the N8405-019A firmware for the N8405-019 as well. However, when you mix the two EM Cards in the same enclosure, web console management is not available. To use this function, use N8405-019A EM Cards only.

Connecting SNMP cards to EXT ports 1 and 2 on the rear side of the enclosure using LAN cables enables you to configure and manage the UPS settings on the EM Card web console.
Important:
- The above function does not allow you to schedule power-on/off times for your system.
- Power management software is required for scheduled operations and for power management using SigmaSystemCenter.

■ Power consumption management
You can set the maximum power consumption for each enclosure by using the EM Card, and monitor it to keep power consumption below the maximum allowed wattage.

Functionality Comparison

■ Virtual IO control
* Virtualizes the MAC address / WWN / UUID / serial number.
Note:
* The vIO control function is supported by the B120a, B120a-d, B120b, B120b-d, B120b-h*1, AD106a, AD106b, and AT101a only.
* The vIO control function is supported by the following cards only:
- N8403-018 Fibre Channel Controller (2ch/4Gbps)
- N8403-034 Fibre Channel Controller*2 (2ch/8Gbps)
- N8403-021 1000BASE-T (2ch)
- N8403-022 1000BASE-T (4ch)

*1: Not supported in PXE Boot configurations
*2: The latest BIOS/EM firmware is required

Function                          N8405-019              N8405-019              N8405-019              N8405-019A             N8405-019A
                                  Rev.02.01 or older     Rev.04.30 or older     Rev.05.00 or later     Rev.04.30 or older     Rev.05.00 or later
Power management                  OK                     OK                     OK                     OK                     OK
Cooling management                OK                     OK                     OK                     OK                     OK
Enclosure management              OK                     OK                     OK                     OK                     OK
System status monitoring          OK                     OK                     OK                     OK                     OK
Redundancy                        OK                     OK                     OK                     OK                     OK
Switch module set-up function     OK                     OK                     OK                     OK                     OK
Enclosure web console management  NG                     NG                     NG                     OK                     OK
UPS management functions          NG                     OK                     OK                     OK                     OK
Power capping                     NG                     OK                     OK                     OK                     OK
vIO control                       NG                     NG                     OK                     NG                     OK


1Gb Intelligent L3 Switch

Features
- Layer-3 switch module with RJ-45/SFP ports
- Easy-to-use network configuration mode (SmartPanel)

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Model name: 1Gb Intelligent L3 Switch
N-Code: N8406-023A

Performance:
- Switching capacity: 48Gbps
- MAC addresses: up to 8K addresses (per module)
- Forwarding method: store and forward
- Forwarding rate: 1,488,095pps per port; external ports max. 7.4Mpps (5 x 1,488,095)

Interface:
- User ports: RJ-45 (10BASE-T/100BASE-TX/1000BASE-T) 5 ports (four ports are exclusive with SFP modules); SFP module (1000BASE-SX) 4 ports (exclusive with RJ-45)
- Serial port: D-Sub 9-pin x1
- Interconnect (modules): 1000BASE-X 2 ports
- Interconnect (CPU Blades): 1000BASE-X 16 ports

Functions:
* Layer 2 Switch function
* Layer 3 Switch function
* VLAN (Port/Tag/Private)
* Spanning tree (STP, RSTP, MSTP, PVRST)
* Link aggregation (static, LACP)
* Trunk failover (original)
* QoS
* ACL (Access Control List)
* Jumbo frames (up to 9K)
* Port mirroring
* AutoMDI/MDI-X
* SNMP v1, v2c, v3
* IGMP snooping (v1, v2)
* NTP client
* DNS client
* syslog
* 802.1x
* RADIUS
* TACACS+
* RMON (Groups 1,2,3,9)
* Routing protocols (static, RIPv1, RIPv2, OSPF)
* VRRP

Management interface:
- GUI: Web console
- CLI: telnet/ssh; console (serial, D-Sub 9-pin connector)

Supported MIBs: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1,2,3,9

Supported standards: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, IEEE802.1w, IEEE802.1s, IEEE802.1Q

Size: 1 slot width
Power consumption: up to 30W (DC)
Weight: 2kg



1Gb Intelligent L3 Switch

Front View

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

[Front view: user ports (RJ-45, SFP module), ID LED, serial port, RESET switch, STATUS LED, LINK/ACT LED, SPEED LED.]

►If you need more LAN ports than provided as standard to expand your network connection, an optional 1000BASE-T Adapter is required.
►An optional cross cable (K410-84(05)) is recommended for configuring the 1Gb Intelligent L3 Switch.
►If you need 1000BASE-SX, an optional 1000BASE-SX SFP module [N8406-024] is required.
►The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Important


Switch Module / Pass-Through Card (LAN)

1Gb Interlink Expansion Card

Blade Enclosure (SIGMABLADE-M)

Features
- Optional card to expand network connections at an affordable price
- Interlinks the 1000BASE-T(2ch) Adapter in mezzanine slot 1 of the CPU Blades to ports 9 through 16 of the Intelligent Switches in switch module slots 1 and 2

Model name: 1Gb Interlink Expansion Card
N-Code: N8406-013
Interface: internal 1000BASE-X 16 ports
Size: 2 slot width
Power consumption: 1W (DC)
Weight: 2 kg

Important
►The 1Gb Interlink Expansion Card can be installed in slots 3 and 4 only.
►Install the Intelligent Switches in switch module slots 1 and 2 of the Blade Enclosure. The 1Gb Interlink Expansion Card can neither be used alone nor used simultaneously with the 1Gb Pass-Through Card.
►Install the 1000BASE-T(2ch) Adapter in mezzanine slot 1 of the CPU Blade.
►The 1Gb Interlink Expansion Card can be connected to the 1Gb Intelligent L3 Switch only.


Blade Enclosure (SIGMABLADE-M)

How to Use 1Gb Interlink Expansion Card

Diagram labels: Switch Module Slot 5, Switch Module Slot 6, 1Gb Interlink Expansion Card (N8406-013), Intelligent Switch x2

The 1Gb Interlink Expansion Card is an optional board that makes the 1Gb Intelligent Switch functionality available at an affordable price when you expand your network connections using the 1000BASE-T(2ch) Adapter. The Adapter is installed in mezzanine slot 1 of the CPU Blade and connected, via the 1Gb Interlink Expansion Card, to ports 9 through 16 of the Intelligent Switches already installed in switch module slots 1 and 2. The card therefore lets you interconnect the Adapter with the first two 1Gb Intelligent Switches without purchasing an additional pair of Intelligent Switches for slots 3 and 4.

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Diagram: the embedded GbE of the CPU Blade (Express5800/B120b) connects to ports 1-8 of the Intelligent Switches; the 1000BASE-T(2ch) Adapter in mezzanine slot 1 (for exclusive use of the LAN) connects internally to ports 9-16. Mezzanine slot 2 (Type-1&2) is also shown.


1:10Gb Intelligent L3 Switch

Features
- Layer-3 switch module with four 1Gbps RJ-45 ports and two 10Gbps XFP slots

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Model name: 1:10Gb Intelligent L3 Switch
N-Code: N8406-044

Performance
- Switching capacity: 122 Gbps
- Forwarding method: Store and Forward
- MAC address: up to 8K addresses (per module)
- Forwarding rate: 1,488,095 pps per 1Gbps port, 14,880,950 pps per 10Gbps port; external ports: max. 48 Mpps

Interface
- User port: RJ-45 (10BASE-T/100BASE-TX/1000BASE-T) 4 ports; XFP module (10GBASE-SR) 2 ports
- Serial port: D-Sub 9-pin x1
- Interconnect (modules): 10GBASE-X 1 port
- Interconnect (CPU Blades): 1000BASE-X 16 ports

Features
- Layer 2 switching, Layer 3 switching, VLAN (Port/Tag/Protocol/Private), spanning tree (STP/RSTP/MSTP/PVRST), link aggregation (static, LACP), trunk failover (original), QoS, ACL (Access Control List), jumbo frames (up to 9K), port mirroring, Auto MDI/MDI-X, IGMP snooping (v1, v2, v3), NTP client, DNS client, syslog, 802.1x, RADIUS, TACACS+, SNMP v1/v2c/v3, RMON (Groups 1, 2, 3, 9), routing protocols (static, RIPv1, RIPv2, OSPF), VRRP

Management interface
- GUI: Web console
- CLI: telnet/ssh; console (serial, D-Sub 9-pin connector)

Supported MIBs: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1, 2, 3, 9
Supported standards: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, IEEE802.1w, IEEE802.1s, IEEE802.1Q

Size: 1 slot width
Power consumption: up to 50W (DC)
Weight: 2 kg
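Both intelligent switch families expose the standard MIB-II objects over SNMP, so a generic SNMP manager can poll them. The OIDs below are defined by the RFCs listed under "Supported MIBs" (RFC 1213), not by NEC; the interface index is an illustrative parameter:

```python
# Standard MIB-II (RFC 1213) object identifiers that any SNMP v1/v2c/v3
# manager can poll on these switch modules.
MIB2 = "1.3.6.1.2.1"

SYS_DESCR  = f"{MIB2}.1.1.0"   # sysDescr: textual device description
SYS_UPTIME = f"{MIB2}.1.3.0"   # sysUpTime: time since re-initialization
IF_NUMBER  = f"{MIB2}.2.1.0"   # ifNumber: number of network interfaces

def if_oper_status_oid(if_index: int) -> str:
    """ifOperStatus for one row of the ifTable (RFC 1213)."""
    return f"{MIB2}.2.2.1.8.{if_index}"

print(SYS_DESCR)              # 1.3.6.1.2.1.1.1.0
print(if_oper_status_oid(3))  # 1.3.6.1.2.1.2.2.1.8.3
```

With a typical open-source manager such as net-snmp, the first object could be read with `snmpget -v2c -c <community> <switch-address> 1.3.6.1.2.1.1.1.0`; the community string and address here are placeholders.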


N8406-044

1:10Gb Intelligent L3 Switch

Front View

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Front-view labels: ID LED, serial port, user ports (RJ-45, XFP module), RESET switch, STATUS LED, LINK/ACT LED, SPEED LED

Important
►If you need more LAN ports than provided as standard to expand your network connection, an optional 1000BASE-T Adapter is required.
►The optional cross cable (K410-84(05)) is recommended for configuring the 1:10Gb Intelligent L3 Switch.
►If you need 10GBASE-SR, an optional XFP module [N8406-027] is required.
►The connection speed to CPU Blades is 1 Gbps (not 10 Gbps).


Switch Module / Pass-Through Card (LAN)

10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-M)

Features
- Layer-3 switch module with eight 10Gb SFP+ module slots

Model name: 10Gb Intelligent L3 Switch
N-Code: N8406-051F

Performance
- Switching capacity: 480 Gbps
- MAC address: up to 128K addresses (per module)
- Forwarding rate: 14.8 Mpps per port; external ports: max. 118 Mpps (8 x 14.8)

Interface
- User port: SFP+ 8 ports (4 ports are mutually exclusive with the interconnect (modules) ports)
- Interconnect (modules): internal 4 ports (mutually exclusive with external ports)
- Interconnect (CPU Blades): internal 16 ports

Features
- Layer 2 switching, Layer 3 switching, VLAN (Port/Tag/Protocol/Private), spanning tree (STP/RSTP/MSTP/PVRST), link aggregation (static, LACP), trunk failover (original), QoS, ACL (Access Control List), jumbo frames (up to 9K), port mirroring, Auto MDI/MDI-X, IGMP snooping (v1, v2, v3), NTP client, DNS client, syslog, 802.1x, RADIUS, TACACS+, SNMP v1/v2c/v3, RMON (Groups 1, 2, 3, 9), routing protocols (static, RIPv1, RIPv2, OSPF), VRRP

Management interface
- GUI: Web console
- CLI: telnet/ssh; serial console (connect via EM Card)

Supported MIBs: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), RMON v1 (RFC 1757) Groups 1, 2, 3, 9

Size: 1 slot width
Power consumption: up to 100W (DC)
Weight: 2 kg


10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

N8406-051F

Front View

Front-view labels: RESET switch, ID LED, LINK LED, ACT LED, STATUS LED, user ports (SFP+)

Important
►For external network connection, a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) is required.
►Note that SFP modules differ between 1 Gbps and 10 Gbps.
►For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (adjacent slots only).
►Connection with the 10GbE Adapter (N8403-024) is not supported.
►10BASE-T/100BASE-TX is not supported.


10Gb Intelligent L3 Switch

Blade Enclosure (SIGMABLADE-M)

Features
- Layer-3 switch module with four 10GBASE-SR external ports. Both internal and external ports support 10 Gbps.

Switch Module / Pass-Through Card (LAN)

Model name: 10Gb Intelligent L3 Switch
N-Code: N8406-026

Performance
- Switching capacity: 10 Gbps per port (20 Gbps with full-duplex connection); non-blocking (full wire speed on all connections)
- Forwarding method: Store and Forward
- MAC address: up to 8K addresses (per module)

Interface
- User port: 10Gb XFP module slot, 4 ports
- Serial port: D-Sub 9-pin x1
- Interconnect (CPU Blades): 10 Gbps 16 ports

Features
- Layer 3 switching, VLAN (Port/Tag), spanning tree (STP, RSTP, MSTP), link aggregation (static), link aggregation (LACP), trunk failover (original), QoS, jumbo frames (up to 9K), port mirroring, IGMP snooping (v1, v2, v3), NTP client, DNS client, syslog, RADIUS (client), TACACS+, SNMP v1/v2c/v3, RMON (Groups 1, 2, 3, 9)

Management interface
- GUI: Web console
- CLI: telnet/ssh; console (serial, D-Sub 9-pin connector)

Supported MIBs: SNMPv1 (RFC 1157), MIB-II (RFC 1213), Bridge MIB (RFC 1493), Interface MIB (RFC 2863), Ethernet MIB (RFC 1643), 802.1Q extension bridge MIB (RFC 2674), Entity MIB (RFC 2037), RMON v1 (RFC 1757) Groups 1, 2, 3, 9
Supported standards: IEEE802.3, IEEE802.3u, IEEE802.3ab, IEEE802.3ac, IEEE802.3ad, IEEE802.1D, IEEE802.1p, IEEE802.1w, IEEE802.1s, IEEE802.1Q, IEEE802.3x, IEEE802.1x

Size: 2 slot width
Power consumption: up to 70W (DC)
Weight: 3 kg


10Gb Intelligent L3 Switch

Front View

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Front-view labels: STATUS LED, ID LED, serial port, LINK LED, ACT LED, user ports, uplink ports (XFP module unit slots), RESET switch

Important
►The 10GbE Adapter (N8403-024) is required for the CPU Blade.
►This switch is two slots wide and can be installed into switch module slots 5/6 only.
►The optional cross cable (K410-84(05)) is recommended for configuring the 10Gb Intelligent L3 Switch.
►A 10GBASE-SR XFP module (N8406-027) is necessary to connect to an external network via the uplink ports.
►This switch supports 10 Gbps only; it does not support 10/100/1000BASE-T.


Switch Module / Pass-Through Card (LAN)

1Gb Pass-Through Card

Blade Enclosure (SIGMABLADE-M)

Features
- Enables the two LAN ports provided in the CPU Blade to be used for network connections (2 ports per CPU Blade)

Product name: GbE Pass-Through Card
N-Code: N8406-011
Interface: user port 1000BASE-T, up to 16 ports (RJ-45)
Size: 2 slot width
Power consumption: 37W (DC)
Weight: 3 kg

Front View

Front-view labels: LAN ports (1-8, 1-8), STATUS LED, ID LED, RESET switch

Important
►The 1Gb Pass-Through Card enables both LAN ports provided in the CPU Blade to be connected directly to an external network.
►If you need more LAN ports than provided as standard to expand your network connection, a 1000BASE-T(2ch) Adapter is required.
►When you use a 1000BASE-T(4ch) Adapter, you cannot install this 1Gb Pass-Through Card; be sure to use an Intelligent Switch instead.
►Only 1000BASE-T is supported for connection to an external network (10BASE-T/100BASE-TX is not supported).


10Gb Pass-Through Card

Features- Pass-Through card with internal/external 1Gb/10Gb support

Front View

Blade Enclosure (SIGMABLADE-M)

Switch Module / Pass-Through Card (LAN)

Model name: 10Gb Pass-Through Card
N-Code: N8406-035
Interface: user port, up to 8 ports
Size: 1 slot width
Power consumption: up to 61W (DC)
Weight: 2 kg

Front-view labels: ID LED, STATUS LED, port STATUS LED, RESET switch, LAN ports

Important
►To connect to an external network, choose a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) that matches the LAN port type of the blade.
►Cannot be connected to the 10GbE Adapter (N8403-024) on the blade server.
►To connect external devices, route through an external switch.
►Does not support 10BASE-T/100BASE-TX.


Switch Module / Pass-Through Card (FC)

8G FC switch (12 ports)

Features- 8G FC Switch Module

Blade Enclosure (SIGMABLADE-M)

N-Code: N8406-040
Ports (max): 12 (12)
Interlink ports / external ports: 8 / 4
Port maximum transfer rate (Gb/s): 8
Internal architecture: shared memory, non-blocking; F / FL / E port support
Size: 1 slot width

Features
* High-speed switching function
* Auto routing function
* Cascade connection of FC switches
* Zoning function

Front View
Front-view labels: ID LED, STATUS LED, RESET switch, Active LED, PORT STATUS LED, external device connection ports (open ports: ports 17 to 20)

Power consumption: up to 38W (DC)
Weight: 2 kg

Important
►When you use both ports of a Fibre Channel controller (2ch) in the CPU Blade, two FC Switch Modules must be installed in the Blade Enclosure.
►Two FC SFP+ modules are attached as standard for connecting external FC cables. To add SFP+ modules, purchase the 4/8G FC SFP+ module (N8406-041).
►This product supports 4 Gbps/8 Gbps.
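The zoning function listed above partitions the FC fabric so that only ports placed in a common zone can see each other. The toy model below illustrates that generic zoning semantics only; the zone names and WWPNs are made up, and this is not a representation of the switch's actual configuration interface:

```python
# Toy model of FC zoning semantics: two ports (identified by WWPN) can
# communicate only if at least one zone contains them both.
zones = {
    "zone_db1": {"10:00:00:00:c9:00:00:01", "50:06:01:60:00:00:00:01"},
    "zone_db2": {"10:00:00:00:c9:00:00:02", "50:06:01:60:00:00:00:01"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if some zone contains both WWPNs."""
    return any(wwpn_a in z and wwpn_b in z for z in zones.values())

# Both HBAs see the shared storage port, but not each other:
print(can_communicate("10:00:00:00:c9:00:00:01", "50:06:01:60:00:00:00:01"))  # True
print(can_communicate("10:00:00:00:c9:00:00:01", "10:00:00:00:c9:00:00:02"))  # False
```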


Switch Module / Pass-Through Card (FC)

2/4G FC Pass-Through Card

Blade Enclosure (SIGMABLADE-M)

Features
- Connects a Fibre Channel controller (2ch) in the CPU Blade to an external network (2 ports per CPU Blade)

Product name: 2/4G FC Pass-Through Card
N-Code: N8406-021
Interface: user port, SFP, up to 16 ports
Size: 2 slot width
Power consumption: 24W (DC)
Weight: 3 kg

Front View
Front-view labels: communication ports (1-8, 1-8), STATUS LED, ID LED, RESET switch

Important
►By using the FC Pass-Through Card, both ports of the Fibre Channel controller (2ch) become available for use.
►To connect an FC cable, you need an optional FC SFP module [N8406-015] for each port; purchase one FC SFP module per port you use.
►This product supports 2 Gbps/4 Gbps; it does not support 1 Gbps.
►Connection with an external 8G FC switch is not supported.


Blade Enclosure Configuration

Blade Enclosure (SIGMABLADE-M)

SIGMABLADE-M

CPU Blade / Blade Slot (x8)
See "CPU Blade".
* Slot Cover [N8405-032]: for efficient cooling, any vacant slot must be covered with a Slot Cover.

- CPU Blades consuming one blade slot can be installed into any empty slot:
  Express5800/B120a, Express5800/B120a-d, Express5800/B120b, Express5800/B120b-d, Express5800/B120b-h, Express5800/120Bb-6, Express5800/120Bb-d6

(Blade slot layout: slots 1 through 8)

- CPU Blades consuming two blade slots can be installed into paired slots only: slots 1-2, 3-4, 5-6, or 7-8
  Express5800/120Bb-m6

Installing CPU Blades

EM Card (2 slots)
Not installed as standard. At least one card is required; redundancy is enabled by installing two cards.
* EM Card [N8405-019A]

Express5800/120Bb-m6; or Express5800/B120a, B120a-d, B120b, B120b-d, or B120b-h equipped with AD106a, AD106b, or AT101a (e.g.: B120a in slot 7, AD106a in slot 8)

Mix of CPU Blades example: one-slot CPU Blades installed alongside a two-slot-width CPU Blade (blade slots 1 through 8).


Blade Enclosure Configuration

Blade Enclosure (SIGMABLADE-M)

SIGMABLADE-M Power Unit (4 power slots)
Not provided as standard. Up to 4 units are allowed in the Blade Enclosure.
* Power Unit (2,250W, 80 PLUS® Gold) [N8405-055F]
* AC200V
See "Power Unit & Cooling Fan Configuration".
* A redundant configuration is highly recommended.
* Do not mix Power Units of different types.

Switch Module (6 switch slots): see "Switch Module Configuration".

Fan Unit (5 fan slots)
Not provided as standard. Up to 5 units are allowed in the Blade Enclosure.
* Fan Unit [N8405-053]
* Mandatory option. Refer to "Power Unit & Cooling Fan Configuration" for details. Can be mixed with N8405-018.

Front Bezel
Not provided as standard.
* Front Bezel [N8405-033]
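The recommended redundant power configuration is easiest to reason about as N+N: half the installed units must be able to carry the whole enclosure on their own. The sketch below uses the 2,250W rating of the unit listed above; the enclosure load you compare against it is site-specific and not an NEC rating:

```python
# N+N redundancy check for the SIGMABLADE-M power slots (max 4 units).
UNIT_WATTS = 2250   # rating of the Power Unit option above
TOTAL_SLOTS = 4

def redundant_capacity(installed_units: int) -> int:
    """Usable watts when half the installed units may fail (N+N)."""
    assert 2 <= installed_units <= TOTAL_SLOTS and installed_units % 2 == 0
    return (installed_units // 2) * UNIT_WATTS

print(redundant_capacity(4))  # 4500W usable with 2+2 redundancy
print(redundant_capacity(2))  # 2250W usable with 1+1 redundancy
```

If the configured blades, switch modules, and fans draw more than this figure, redundancy is lost on a unit failure even though the enclosure still powers on.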


Blade Enclosure (SIGMABLADE-M)

Blade Enclosure Configuration

[Connecting console devices to the enclosure directly]

The SIGMABLADE-M provides a display interface, a keyboard interface, and a mouse interface (x1 each).
- Display: local procurement
- PS/2 keyboard: local procurement
- PS/2 mouse: not attached to the Blade Enclosure; purchase separately if necessary. * Mouse [N8170-23]


Blade Enclosure (SIGMABLADE-M)

Blade Enclosure Configuration
SIGMABLADE-M

[Connecting console devices to the blades directly]

Each CPU Blade has one SUV interface. One SUV cable (60 cm) is attached to the SIGMABLADE-M as standard [K410-150(00)]. The cable breaks out into:
1: Display I/F x1
2: USB I/F x2
3: Serial I/F x1 (for maintenance)

* The following configuration is recommended when you install the OS: connect a display to the display connector (1) and the Flash FDD to a USB connector (2).

Flash FDD: not provided as standard; purchase if necessary.
* Flash FDD (USB) [N8160-86], capacity 1.44MB

Remarks: the Flash FDD is not provided as standard. For OS re-installation, purchase at least one Flash FDD per system. For more information, read the User's Guide of the Flash FDD. The Flash FDD does not support disaster-recovery functions of the OS or backup software.


Blade Enclosure (SIGMABLADE-M)

[Connecting the Blade Enclosure to a Console via a Server Switch Unit]

* Server switch unit [N8191-10F]: up to eight servers can be connected.

Connect the display, keyboard, and mouse interfaces of the SIGMABLADE-M to the server switch unit.
* Switch Unit Connection Cable & PS/2 cable set (1.8m/3m/5m):
  [K410-119(1A)] (1.8m), [K410-119(03)] (3m), [K410-119(05)] (5m)

Server switch unit (cascade connection)
By configuring a cascade connection of 8 SSUs, up to 64 servers can be connected (one cascade layer only).
* Switch Unit Connection Cable & PS/2 cable set: [K410-119(1A)] (1.8m)
* Display / keyboard cable extension: [K410-104A(02)] (2m), [K410-104A(03)] (3m)

Console devices:
- Display: local procurement
- PS/2 keyboard: local procurement
- PS/2 mouse: not attached to the Blade Enclosure; purchase separately if necessary. * Mouse [N8170-23]


SIGMABLADE-M [Connecting a Remote Console]

EM Card

Enclosure Management LAN

[Management Server]

Blade Enclosure (SIGMABLADE-M)

Blade Enclosure Configuration

x2 LAN connections (if you use one Enclosure Manager Card (EM Card), one cable is enough)

[EM Card Major Functions]
* Remote KVM function
* Virtual media function
* Collection of Blade Enclosure status information (power units, cooling fans, etc.)
* CPU Blade status information is collected via NEC ESMPRO/ServerAgent.

[Recommended Specification for Management Server]
* CPU: Intel Xeon/Pentium/Celeron family processor, 1 GHz or above
* Memory: 512MB or more
* HDD capacity: 350MB or more (separate disk space is required for data backup)
* LAN: 1 port
* Serial: 1 port
* Recommended OS: Windows Server 2008 R2
* Browser: Internet Explorer 8.0 or later
* If you use "SigmaSystemCenter", check the recommended specifications of "SigmaSystemCenter".

[SIGMABLADE Management LAN Connection Example]

Diagram summary: the management server (running ESMPRO/ServerManager) connects through an L2 switch to the 1Gb Intelligent Switches and the management LAN of the SIGMABLADE-M. ESMPRO/ServerManager collects memory and disk drive information from each CPU Blade (via ESMPRO/ServerAgent and the EXPRESSSCOPE Engine), and collects power unit and cooling fan information from the Blade Enclosure via the EM Card. A dedicated LAN connection is used for the remote KVM function.


Switch Module Configuration

Blade Enclosure (SIGMABLADE-M)

Switch Module Slot Positions

Switch Module Slot 5

Switch Module Slot 6

Switch Module Slot 3

Switch Module Slot 4

Switch Module Slot 1

Switch Module Slot 2

* Slots 1 and 2 are used for LAN connections. Select the 1Gb Intelligent Switch, 10Gb Intelligent L3 Switch (N8406-051F), 1Gb Pass-Through Card, or 10Gb Pass-Through Card for these slots. Do not install an FC Switch or FC Pass-Through Card in these slots.
* If you use 1-slot-width switch modules, you need not install identical products in adjacent slots, provided they have the same interface (e.g., FC, GbE). However, we recommend installing identical products to ease administration.

Blade Enclosure (SIGMABLADE-M)

1 slot width

2 slot width

CPU Blade (Express5800/B120a, B120a-d, B120b, B120b-d, B120b-h, Express5800/120Bb-6, 120Bb-d6), Storage and I/O Blade (AD106a, AD106b), Tape Blade (AT101a)

Mezzanine slot 1(Type-1 only)

Mezzanine slot 2(Type-1&2)

Switch Module Slot 1

Switch Module Slot 2

On-board LAN

Switch Module Slot 3

Switch Module Slot 4

Connecting Mezzanine Slots to Switch Module Slots

Switch Module Slot 6

Switch Module Slot 5

(Diagram: on-board LAN ports P1/P2 connect to switch module slots 1 and 2; mezzanine slot 1 ports P1/P2 connect to slots 3 and 4; mezzanine slot 2 ports P1-P4 connect to slots 5 and 6.)

* Because CPU Blades share the same switch modules, identical mezzanine cards (with identical interfaces) must be installed in the same slot positions across all CPU Blades.
  e.g. When you install a pair of FC Switches in switch module slots 3 and 4, you cannot use a 1000BASE-T adapter in mezzanine slot 1.


Switch Module Configuration
Connecting Mezzanine Slots to Switch Module Slots (with 10Gb(2ch) adapter N8403-024)

Blade Enclosure (SIGMABLADE-M)

CPU Blade(Express 5800/B120a,B120a-d, B120b, B120b-d)

Mezzanine slot 1(Type-1 only)

Mezzanine slot 2(Type-1&2)

Switch Module Slot 1

Switch Module Slot 2

On-board LAN

Switch Module Slot 3

Switch Module Slot 4

Switch Module Slot 5, 6

(Diagram: on-board LAN ports P1/P2 connect to switch module slots 1 and 2; the 10GbE(2ch) adapter in mezzanine slot 2 connects ports P1/P2 to switch module slots 5 and 6.)

* Because two or more CPU Blades share the same switch module, identical mezzanine cards (with identical interfaces) must be installed in the same mezzanine slot positions across the CPU Blades.
* The 10GbE(2ch) mezzanine card (N8403-024) is a Type-2 product; you cannot install it in a Type-1 slot.
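The fixed backplane wiring described in the diagrams above can be summarized as a small lookup table. This mapping (on-board LAN to slots 1/2, mezzanine slot 1 to slots 3/4, mezzanine slot 2 to slots 5/6) is a reading of the diagrams in this section and should be checked against the enclosure documentation before use:

```python
# Blade-side port groups of the SIGMABLADE-M and the switch module slots
# they are hard-wired to via the backplane, as read from the diagrams above.
SWITCH_SLOT_MAP = {
    "onboard_lan":     (1, 2),  # embedded GbE ports P1/P2
    "mezzanine_slot1": (3, 4),  # Type-1 cards only
    "mezzanine_slot2": (5, 6),  # Type-1 & Type-2 cards (e.g. 10GbE N8403-024)
}

def switch_slots_for(port_group: str) -> tuple:
    """Return the switch module slots a blade port group connects to."""
    return SWITCH_SLOT_MAP[port_group]

print(switch_slots_for("mezzanine_slot2"))  # (5, 6)
```

A table like this makes the compatibility rules mechanical: a mezzanine card is only usable if a matching switch module or pass-through card occupies the slots its mezzanine position is wired to.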


SIGMABLADE-M

Connection to LAN

Connection to Switch Module slot 1,2

* 1Gb Intelligent L3 Switch [N8406-023A]Slot 1

Slot 2 * 1Gb Intelligent L3 Switch [N8406-023A]

►Connecting 1Gb Intelligent L3 Switch

Option for 1Gb Intelligent L3 Switch* 1000Base-SX SFP module [N8406-024]

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

To LAN

To LAN

To LAN

To LAN

►Connecting 1:10Gb Intelligent L3 Switch

To LAN

► Switch module

* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

* 1:10Gb Intelligent L3 Switch [N8406-044]Slot 1

Slot 2 * 1:10Gb Intelligent L3 Switch [N8406-044]

Option for 1:10Gb Intelligent L3 Switch* 10GBASE-SR XFP module [N8406-027]

To LAN

To LAN

To LAN

To LAN

* To connect to an external device, an optional SFP module matching the interface of the device is required.
* For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (adjacent slots only).

* 10Gb Intelligent L3 Switch (N8406-051F)Slot 1

Slot 2 * 10Gb Intelligent L3 Switch (N8406-051F)

►Connecting 10Gb Intelligent L3 Switch

Option for 10Gb Intelligent L3 Switch* 10GBASE-SR SFP+ module [N8406-037]* 1000BASE-T SFP module [N8406-039]* 1000Base-SX SFP module [N8406-024]* 10G SFP+ Copper Cable [K410-203(03)]

To LAN

To LAN

To LAN

To LAN


SIGMABLADE-M

►Connecting 1Gb Pass-Through Card

Slot 1

Slot 2

Connection to LAN

To LAN* 1Gb Pass-Through Card [N8406-029]

* 1Gb Pass-Through Card [N8406-029]

Connection to Switch Module slot 1,2

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

To LAN

► Pass Through Card

►Connecting 10Gb Pass-Through Card

Slot 1 To switch* 10Gb Pass-Through Card [N8406-035]

Slot 2 * 10Gb Pass-Through Card [N8406-035] To switch

Selection-mandatory options for the 10Gb Pass-Through Card: * 10GBASE-SR SFP+ module [N8406-037] * 1000BASE-T SFP module [N8406-039] * 1000BASE-SX SFP module [N8406-024]

* To connect the 10Gb Pass-Through Card to an external network, choose a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) that matches the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module.
* To use 1Gb speed, select the 1000BASE-T SFP module or the 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.


SIGMABLADE-M

Connection to LAN

Connection to Switch Module slot 3,4

* 1Gb Intelligent L3 Switch [N8406-023A]Slot 3

Slot 4 * 1Gb Intelligent L3 Switch [N8406-023A]

►Connecting 1Gb Intelligent L3 Switch

Option for 1Gb Intelligent L3 Switch* 1000Base-SX SFP module [N8406-024]

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

To LAN

To LAN

To LAN

To LAN

* 1:10Gb Intelligent L3 Switch [N8406-044]Slot 3

►Connecting 1:10Gb Intelligent L3 Switch

To LAN

► Switch module

* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

* 1:10Gb Intelligent L3 Switch [N8406-044]Slot 3

Slot 4 * 1:10Gb Intelligent L3 Switch [N8406-044]

Option for 1:10Gb Intelligent L3 Switch* 10GBASE-SR XFP module [N8406-027]

To LAN

To LAN

To LAN

* To connect to an external device, an optional SFP module matching the interface of the device is required.
* For connection between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (adjacent slots only).

* 10Gb Intelligent L3 Switch (N8406-051F)Slot 3

Slot 4 * 10Gb Intelligent L3 Switch (N8406-051F)

►Connecting 10Gb Intelligent L3 Switch

Option for 10Gb Intelligent L3 Switch* 10GBASE-SR SFP+ module [N8406-037]* 1000BASE-T SFP module [N8406-039]* 1000Base-SX SFP module [N8406-024]* 10G SFP+ Copper Cable [K410-203(03)]

To LAN

To LAN

To LAN

To LAN


Switch Module Configuration

SIGMABLADE-M

Connection to Switch Module slot 3,4

Blade Enclosure (SIGMABLADE-M)

Connection to LAN

Slot 3

Slot 4

Slot 1

Slot 2

* 1Gb Intelligent L3 Switch [N8406-023A]To LAN

To LAN* 1Gb Intelligent L3 Switch [N8406-023A]

* 1Gb Interlink Expansion Card [N8406-013]

Connected internally

Connected internally

►1Gb Interlink Expansion Card

► Pass Through Card►Connecting 1Gb Pass-Through Card

Slot 3

Slot 4

To LAN

* 1Gb Pass-Through Card [N8406-011]

To LAN

► Pass Through Card

►Connecting 10Gb Pass-Through Card

Slot 3

Slot 4

To switch* 10Gb Pass-Through Card [N8406-035]

* 10Gb Pass-Through Card [N8406-035] To switch

Selection-mandatory options for the 10Gb Pass-Through Card: * 10GBASE-SR SFP+ module [N8406-037] * 1000BASE-T SFP module [N8406-039] * 1000BASE-SX SFP module [N8406-024]

* To connect the 10Gb Pass-Through Card to an external network, choose a 1000BASE-SX SFP module (N8406-024), 10GBASE-SR SFP+ module (N8406-037), or 1000BASE-T SFP module (N8406-039) that matches the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module.
* To use 1Gb speed, select the 1000BASE-T SFP module or the 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.


Connection to Switch Module slot 3,4

SIGMABLADE-M Connecting Fibre Channel Modules

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

Slot 3

Slot 4

Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]

►Connecting 8G FC Switch

* 8G FC switch (12 ports) [N8406-040]- Two SFP+ modules are attached as standard

* 8G FC switch (12 ports) [N8406-040]- Two SFP+ modules are attached as standard

Optional FC SFP+ Module[N8406-041] for 8G FC Switch

NEC Storage D1/D3/D4/D8/E1/M

* For NEC Storage products, refer to the configuration guide of NEC Storage or the NEC Storage Web site. (http://www.nec.com/global/prod/storage/)

* Connection to External FC Switch [N8190-119] is not supported.


Connection to Switch Module slot 3,4

SIGMABLADE-M

Slot 3

Slot 4

Connecting Fibre Channel Modules

►Connecting FC Pass-Through Card

* 2/4G FC Pass-Through Card [N8406-021]- SFP modules are required

Optional FC SFP Module[N8406-015] for FC Pass-Through Card

NEC Storage S500/S550/S1500/S2500/S2900/D1/D3/D8/E1/M

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

External FC switchNF9330-SS07(8 port)NF9330-SS08(16 port)NF9330-SS011(16 port)NF9330-SS012(32 port)NF9330-SS013(8 port)NF9330-SS014(16 port)NF9330-SS015(24 port)

Fibre Channel cable (set of two)5m [NF9320-SJ01E]10m[NF9320-SJ02]20m[NF9320-SJ03]50m[NF9320-SJ04]

* No FC SFP Modules are attached to 2/4G FC Pass-Through Card as standard. * To connect FC cables, you need optional FC SFP Module. Same number of SFP modules as FC ports are required.

e.g. If you have 5 CPU blades equipped with FC controller (2ch), you need to purchase 10 FC SFP modules for this 2/4G FC Pass-Through Card. * Connection to External FC Switch [N8190-119] is not supported.* Connection to NF9340-SS017/SS018/SS019/SS024/SS025/SS026 external 8GB FC switch is not supported.

External FC switch (continued): NF9330-SS016 (40 port), NF9330-SS22 (8 port), NF9330-SS23 (8 port), NF9320-SS21 (8 port), NF9320-SS06 (16 port)

* For NEC Storage products, refer to the configuration guide of NEC Storage or NEC Storage WEB site. (http://www.nec.com/global/prod/storage/)
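The SFP sizing rule above is simple multiplication. As a minimal sketch (the function name and example counts are illustrative; only the one-module-per-FC-port rule comes from this guide):

```python
def fc_sfp_modules_required(cpu_blades: int, fc_channels_per_blade: int) -> int:
    """One optional FC SFP module is needed per FC port on the
    2/4G FC Pass-Through Card, i.e. blades x channels per blade."""
    return cpu_blades * fc_channels_per_blade

# The guide's example: 5 CPU blades, each with a 2ch FC controller.
print(fc_sfp_modules_required(5, 2))  # -> 10
```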


SIGMABLADE-M

Connection to LAN

Connection to Switch Module slot 5,6

Slot 5 * 1Gb Intelligent L3 Switch [N8406-023A]

Slot 6 * 1Gb Intelligent L3 Switch [N8406-023A]

►Connecting 1Gb Intelligent L3 Switch

Option for 1Gb Intelligent L3 Switch: * 1000BASE-SX SFP module [N8406-024]

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

To LAN

To LAN

To LAN

To LAN

Slot 5 * 1:10Gb Intelligent L3 Switch [N8406-044]

►Connecting 1:10Gb Intelligent L3 Switch

To LAN

► Switch module

* The 1000BASE-SX port and the 1000BASE-T port are mutually exclusive.

Slot 5 * 1:10Gb Intelligent L3 Switch [N8406-044]

Slot 6 * 1:10Gb Intelligent L3 Switch [N8406-044]

Option for 1:10Gb Intelligent L3 Switch: * 10GBASE-SR XFP module [N8406-027]

To LAN

To LAN

To LAN

Slot 1 * 10Gb Intelligent L3 Switch (N8406-051F)

Slot 2 * 10Gb Intelligent L3 Switch (N8406-051F)

►Connecting 1:10Gb Intelligent L3 Switch

Options for 10Gb Intelligent L3 Switch:
* 10GBASE-SR SFP+ module [N8406-037]
* 1000BASE-T SFP module [N8406-039]
* 1000BASE-SX SFP module [N8406-024]
* 10G SFP+ Copper Cable [K410-203(03)]

To LAN

To LAN

To LAN

To LAN

* To connect to an external device, an optional SFP module matching the interface is required.
* For connections between 10Gb Intelligent L3 Switches, use an optional 10G SFP+ Copper Cable (K410-203(03)) or connect via the backplane (exclusive connection).


SIGMABLADE-M

Connection to LAN

Connection to Switch Module slot 5,6

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

► Pass Through Card

Slot 5

Slot 6

Option for 10Gb Intelligent L3 Switch: * 10GBASE-SR XFP module [N8406-027]

* 10Gb Intelligent L3 Switch [N8406-026] To LAN

* Up to one 10Gb Intelligent L3 Switch [N8406-026] can be installed in SIGMABLADE-M.
* The 10Gb Intelligent L3 Switch [N8406-026] is installed in Switch Module slots 5 and 6 (one 10GbE switch occupies two slots).
* The optional 10GBASE-SR XFP module (N8406-027) is required for connection to an external network (up to 4 ports).
* The 10GbE adapter N8403-024 must be installed in the CPU blade.

►Connecting 1Gb Pass-Through Card

Slot 5

Slot 6

To LAN

* 1Gb Pass-Through Card [N8406-011]

To LAN

►Connecting 10Gb Pass-Through Card

Slot 5

Slot 6

* 10Gb Pass-Through Card [N8406-035] To switch

* 10Gb Pass-Through Card [N8406-035] To switch

Selection-mandatory options for the 10Gb Pass-Through Card:
* 10GBASE-SR SFP+ module [N8406-037]
* 1000BASE-T SFP module [N8406-039]
* 1000BASE-SX SFP module [N8406-024]

* To connect the 10Gb Pass-Through Card to an external network, choose the 1000BASE-SX SFP module (N8406-024), the 10GBASE-SR SFP+ module (N8406-037), or the 1000BASE-T SFP module (N8406-039) to match the LAN port type of the blade.
* To use 10Gb speed, select the 10GBASE-SR SFP+ module.
* To use 1Gb speed, select the 1000BASE-T SFP module or the 1000BASE-SX SFP module.
* To connect external devices, route through an external switch.


Connection to Switch Module slot 5,6

SIGMABLADE-M Connecting Fibre Channel Modules

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

Slot 5

Slot 6

Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]

►Connecting 8G FC Switch

* 8G FC switch (12 ports) [N8406-040]- Two SFP+ modules are attached as standard

* 8G FC switch (12 ports) [N8406-040]- Two SFP+ modules are attached as standard

Optional FC SFP+ Module[N8406-041] for 8G FC Switch

NEC Storage D1/D3/D4/D8/E1/M

* For NEC Storage products, refer to the NEC Storage configuration guide or the NEC Storage web site (http://www.nec.com/global/prod/storage/).

* Connection to External FC Switch [N8190-119] is not supported.


Connection to Switch Module slot 5,6

SIGMABLADE-M

Slot 5

Slot 6

Connecting Fibre Channel Modules

►Connecting FC Through Card

* 2/4G FC Pass-Through Card [N8406-021]- SFP modules are required

Optional FC SFP Module[N8406-015] for FC Pass-Through Card

NEC StorageS500/S550/S1500/S2500/

S2900/D1/D3/D8/E1

Blade Enclosure (SIGMABLADE-M)

Switch Module Configuration

External FC switch: NF9330-SS07 (8 port), NF9330-SS08 (16 port), NF9330-SS011 (16 port), NF9330-SS012 (32 port), NF9330-SS013 (8 port), NF9330-SS014 (16 port), NF9330-SS015 (24 port)

Fibre Channel cable (set of two): 5m [NF9320-SJ01E], 10m [NF9320-SJ02], 20m [NF9320-SJ03], 50m [NF9320-SJ04]

* No FC SFP modules are attached to the 2/4G FC Pass-Through Card as standard. * To connect FC cables, the optional FC SFP Module is required; the same number of SFP modules as FC ports is needed.

e.g. If you have 5 CPU blades each equipped with a 2ch FC controller, you need to purchase 10 FC SFP modules for this 2/4G FC Pass-Through Card. * Connection to the External FC Switch [N8190-119] is not supported. * Connection to the NF9340-SS017/SS018/SS019/SS024/SS025/SS026 external 8Gb FC switches is not supported.

External FC switch (continued): NF9330-SS016 (40 port), NF9330-SS22 (8 port), NF9330-SS23 (8 port), NF9320-SS21 (8 port), NF9320-SS06 (16 port)

* For NEC Storage products, refer to the configuration guide of NEC Storage or NEC Storage WEB site. (http://www.nec.com/global/prod/storage/)


Power Unit & Cooling Fan Configuration

Power Unit

Blade Enclosure (SIGMABLADE-M)

The numbers of power units and cooling fans required depend on the number and type of CPU Blades installed.

■ 2,250W Power Unit (N8405-055F)

Calculate the total DC power consumption:
Total DC power consumption (W) = total DC power consumption of the CPU Blades (W) + 560W (Blade Enclosure power consumption *)

* The Blade Enclosure power consumption includes the power consumed by the switch modules and cooling fans installed.

For the total DC power consumption per CPU Blade, see "CPU Blade" in this document.

See the table below for the number of power units required for your total DC power consumption. Also pay attention to the power unit positions (see the remarks below).

Total DC power consumption / Power units (N+1 redundancy) / Power units (N+N redundancy) / Remarks
Less than 2,244W / 2 units / 2 units / Not provided as standard. Use Power Unit slots 1 and 3.
Less than 4,260W / 3 units / 4 units / Not provided as standard. Use Power Unit slots 1, 2 and 3 (when N+1).

* To calculate AC power consumption and apparent power, use these formulas.

AC power consumption (W) = Total DC (12V) power consumption (W) / 0.88

Apparent power (VA) = AC power consumption (W) / 0.98
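The sizing steps above can be combined into a short worked calculation (a sketch following the guide's formulas and table; the blade wattage in the example is hypothetical):

```python
def total_dc_power(blade_dc_watts):
    """Total DC power (W) = sum of CPU Blade DC consumption + 560 W
    for the Blade Enclosure (switch modules and fans included)."""
    return sum(blade_dc_watts) + 560

def ac_power(dc_watts):
    """AC power consumption (W) = total DC (12V) power consumption (W) / 0.88."""
    return dc_watts / 0.88

def apparent_power(ac_watts):
    """Apparent power (VA) = AC power consumption (W) / 0.98."""
    return ac_watts / 0.98

def power_units_required(dc_watts, redundancy="N+1"):
    """Power unit count from the table above (2,250W units)."""
    if dc_watts < 2244:
        return 2
    if dc_watts < 4260:
        return 3 if redundancy == "N+1" else 4
    raise ValueError("total DC power exceeds the table's range")

# Hypothetical example: four blades drawing 301 W (DC) each.
dc = total_dc_power([301] * 4)              # 1764 W
print(round(ac_power(dc)))                  # 2005 W (AC)
print(round(apparent_power(ac_power(dc))))  # 2045 VA
print(power_units_required(dc))             # 2 units (N+1)
```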


Power Unit & Cooling Fan Configuration

Blade Enclosure (SIGMABLADE-M)

Fan Unit

Note:
- Install the CPU Blades from left to right in the enclosure.
- See the tables below for the number of cooling fans required for your system.
- The installed fans cool the CPU Blade slots as shown in the tables.
- You can lower the fan speed and minimize noise by installing five fans instead of the required number.

Note for the Express5800/120Bb-m6 CPU Blade: the Express5800/120Bb-m6 uses two Blade slots (please refer to the list shown below).

The numbers of cooling fans required depend on the number and type of CPU Blades installed.

Number of CPU Blades installed / Number of fans required (N+1 redundancy) / Remarks
1 or 2 / 3 units / Not provided as standard
3 to 5 / 4 units / Not provided as standard
6 or more / 5 units / Not provided as standard

■ B120a, B120a-d, B120b, B120b-d, B120b-h, AD106a, AD106b, AT101a
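The fan table above maps directly to a small helper (a sketch; the function name is illustrative):

```python
def cooling_fans_required(cpu_blades: int) -> int:
    """Cooling fans (N+1 redundancy) per the table:
    1-2 blades -> 3 fans, 3-5 -> 4 fans, 6 or more -> 5 fans."""
    if cpu_blades < 1:
        raise ValueError("at least one CPU Blade must be installed")
    if cpu_blades <= 2:
        return 3
    if cpu_blades <= 5:
        return 4
    return 5
```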


2. CPU Blade


Express5800/B120b

CPU Blade (Express5800/B120b)

The icons on the configuration chart stand for the supported OS, as shown in the following table. For requests to use a Linux OS, please also refer to the website

http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

Icon meanings: Supported / Certified by Distributor

2008R2: Windows Server 2008 R2 (x64)
2008: Windows Server 2008
2008x64: Windows Server 2008 (x64)
2003: Windows Server 2003 (with SP1 or later)
2003x64: Windows Server 2003, x64 Edition
2003R2: Windows Server 2003 R2
2003R2x64: Windows Server 2003 R2, x64 Edition
ESXi4.1: VMware ESXi 4.1
EL4: Red Hat Enterprise Linux ES4/AS4
EL4x64: Red Hat Enterprise Linux ES4 (EM64T)/AS4 (EM64T)
EL5: Red Hat Enterprise Linux 5/AP5
EL5x64: Red Hat Enterprise Linux 5 (EM64T)/AP5 (EM64T)
EL6: Red Hat Enterprise Linux 6
EL6x64: Red Hat Enterprise Linux 6 (x86_64)


CPU Blade (Express5800/B120b)
Express5800/B120b

High Performance Blade server featuring Xeon processors

Features
- Xeon X5670 / X5650 / L5640 / E5645 / E5606 processors
- Memory: max. 128GB (DDR3)
- 1000BASE-X ports x2 as standard

Express5800/B120b
Model name (N-Code): N8400-113F / N8400-112F / N8400-114F / N8400-111F / N8400-110F
CPU: Intel Xeon E5606 / E5645 / L5640 / X5650 / X5670
Clock frequency: 2.13GHz / 2.4GHz / 2.26GHz / 2.66GHz / 2.93GHz
Intel QuickPath Interconnect: 4.8GT/s / 5.86GT/s / 6.4GT/s
L3 cache: 8MB / 12MB
Maximum (standard) CPUs / cores per CPU: 2 (1) / 4 cores or 2 (1) / 6 cores
CPU features: Intel 64, Intel Virtualization Technology, Enhanced Intel SpeedStep Technology, Intel Hyper-Threading Technology, Intel Turbo Boost Technology
Chipset: Intel 5500
Memory: DDR3-1066/1333 Registered DIMM; x4 SDDC with ECC, lockstep mode (x8 SDDC), memory mirroring *1
Memory speed: Registered DIMM 1066MHz/1333MHz; Standard: none (mandatory option); Maximum capacity: 128GB (16GB x8)
Internal HDD: Standard: diskless (up to 2 drives; select from SAS HDD 73.2GB/146.5GB/300GB/600GB/900GB or SATA HDD 160GB/500GB/1TB)
Maximum: 2TB (SATA HDD 1TB x2); hot-plug supported
Disk controller: SAS / SATA

* A set of console devices is required for each system.

*1: RPQ is required to enable the RAS functions for memory. In addition, all memory modules must be 2GB/4GB/8GB/16GB to use the x4 SDDC function.
*2: Not supported if clean-installed by DeploymentManager. (DeploymentManager is not bundled.)
*3: The on-board disk array is not supported.
*4: For requests to use Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or your regional sales support contact.
*5: Service Pack 2 (SP2) or later is required.

Disk Controller: SAS / SATA
RAID support: RAID 0, 1
2.5" Disk Bays [open]: 2 [2]
Disk interface slot [open]: 1 [1] (selection mandatory option)
Mezzanine slot [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported)
Graphic accelerator / display function: chipset built-in (VRAM: 32MB)
Resolution: 640*480, 800*600, 1,024*768, 1,280*1,024 (each up to 16,770,000 colors)
Interface: 1000BASE-X (connected to midplane) x2; SUV (Serial x1, VGA x1, USB x2) x1 (when a SUV cable is connected)
Weight (maximum): 6kg
Dimension (WxDxH mm): 51.6*515.4*180.7
Supported Enclosure: SIGMABLADE-M, SIGMABLADE-H v2
Max power consumption per CPU Blade: 301W(DC) / 369W(DC) / 332W(DC) / 414W(DC) / 435W(DC)
Max power consumption per enclosure, SIGMABLADE-M: 3,573W(AC)/3,646VA, 4,191W(AC)/4,276VA, 3,855W(AC)/3,933VA, 4,600W(AC)/4,694VA, 4,791W(AC)/4,889VA
Max power consumption per enclosure, SIGMABLADE-H v2: 7,068W(AC)/7,212VA, 8,318W(AC)/8,488VA, 7,638W(AC)/7,794VA, 9,146W(AC)/9,333VA, 9,532W(AC)/9,727VA
Temperature / Humidity: During operation: 10 to 35C / 20 to 80% (non-condensing); When stored: -10 to 55C / 20 to 80% (non-condensing)
Accessories: NEC EXPRESSBUILDER (including NEC ESMPRO Manager/Agent)

Support OS:
Windows:
Microsoft Windows Server 2003 R2 Standard Edition *2, *5 / Enterprise Edition *2, *5
Microsoft Windows Server 2003 R2 Standard x64 Edition *2, *5 / Enterprise x64 Edition *2, *5
Microsoft Windows Server 2008 Standard *2 / Enterprise *2
Microsoft Windows Server 2008 Standard (x64) *2 / Enterprise (x64) *2
Microsoft Windows Server 2008 R2 Standard *2, *3 / Enterprise *2, *3
Linux:
Red Hat Enterprise Linux ES4 *3, *4 / Red Hat Enterprise Linux ES4 (EM64T) *3, *4
Red Hat Enterprise Linux AS4 *3, *4 / Red Hat Enterprise Linux AS4 (EM64T) *3, *4
Red Hat Enterprise Linux 5 *3, *4 / Red Hat Enterprise Linux 5 (EM64T) *3, *4
Red Hat Enterprise Linux Advanced Platform 5 *3, *4 / Red Hat Enterprise Linux Advanced Platform 5 (EM64T) *3, *4
Red Hat Enterprise Linux 6 (x86) *3, *4 / Red Hat Enterprise Linux 6 (x86_64) *3, *4


CPU Blade (Express5800/B120b)

Express5800/B120b Quick Sheet

Express5800/B120b

CPU socket, Memory Slot

Expansion Slot

Mezzanine slot 1

HDD Slot

Open

Open

Disk I/F

RAID controller

Disk

XeonX5670 / X5650 / L5640 / E5645 /

E5606

Additional memory

Additional memory

Additional memory

Mandatory option

With RAID controller:
* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided)
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided)
* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided)
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)
SAS and SATA HDDs cannot be mixed.

With SATA interface card:
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

slot 1

Following products are DDR3-1,333MHz Registered type* Additional 2GB memory module set [N8402-075F] (2GB x1)* Additional 4GB memory module set [N8402-076F] (4GB x1)* Additional 8GB memory module set [N8402-061F] (8GB x1)

Following products are DDR3-1,066MHz Registered type* Additional 16GB memory module set [N8402-048F] (16GB x1)

Memory (DDR3)

Mezzanine Cards for Mezzanine Slots

* CPU Kit (Xeon X5670(2.93GHz)) [N8401-052F]* Can be installed on N8400-110F

* CPU Kit (Xeon X5650(2.66GHz)) [N8401-055F]* Can be installed on N8400-111F

* CPU Kit (Xeon L5640(2.26GHz)) [N8401-053F]* Can be installed on N8400-114F

* CPU Kit (Xeon E5645(2.4GHz)) [N8401-056F]* Can be installed on N8400-112F

* CPU Kit (Xeon E5606(2.13GHz)) [N8401-057F]* Can be installed on N8400-113F

CPU

SIGMABLADE-H v2,MCPU Blade slot

Mezzanine slot 2 SATA interface

card

alternative

* RAID controller [N8403-026]* SATA interface card [N8403-027]* Selection mandatory option

To use SAS-HDD, RAID controller is necessary.

Additional memory

Additional memory

Additional memory

Additional memory

Open

* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter [N8403-020] *1
* 1000BASE-T(2ch) adapter (iSCSI support) [N8403-021]
* 1000BASE-T(4ch) adapter (iSCSI support) [N8403-022] *1
* 10GbE(2ch) adapter [N8403-024]
* 10GBASE-KR adapter [N8403-035]
* Fibre Channel controller (4Gbps/2ch) [N8403-018] *2
* Fibre Channel controller (8Gbps/2ch) [N8403-034] *2

*1: Can only be installed in the Type-2 mezzanine slot. *2: N8403-018 and N8403-034 cannot be mixed.

Note: To install new options, please update the BIOS, firmware, drivers, and EM firmware.


CPU Blade (Express5800/B120b)

Express5800/B120b basic configuration

* N8405-016BF SIGMABLADE-M
Dimensions: Width 484.8mm, Depth 829mm, Height 264.2mm (6U) * Protruding objects included

* N8400-110F Express5800/B120b (6C/X5670)Xeon processor X5670(2.93GHz)x1, 12MB 3rd cache, QuickPath interconnect 6.4GT/s,Memory selectable (required selection from 2GB /4GB /8GB /16GB), Mezzanine slot x2,2.5" SAS/SATA support, HDD selectable (selection from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB)1000BASE-X port x2 (remote wake-up function supported)NEC EXPRESSBUILDER bundled(including NEC ESMPRO/Manager/Agent)


* N8405-040AF SIGMABLADE-H v2
Dimensions: Width 483mm, Depth 823mm, Height 442mm (10U) * Protruding objects included


・ N8400-111F Express5800/B120b (6C/X5650)Xeon processor X5650(2.66GHz)×1, 12MB 3rd cache, QuickPath interconnect 6.4GT/s,Memory selectable (required selection from 2GB/4GB/8GB/16GB), Mezzanine slot x2,2.5” SAS/SATA support, HDD selectable (selection from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB)1000BASE-X port x2 (remote wake-up function supported)NEC EXPRESSBUILDER (including NEC ESMPRO/Manager/Agent)


Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64

* The remote wake-up function allows the CPU Blades to be powered on from a remote console via LAN.
* To maintain cooling efficiency, any vacant slots in the Blade Enclosure must be covered with slot covers.

* An Intelligent Switch or Pass-Through Card is required.

* N8400-114F Express5800/B120b (6C/L5640)Xeon processor L5640(2.26GHz)x1, 12MB 3rd cache, QuickPath interconnect 5.86GT/s,Memory selectable (required selection from 2GB /4GB /8GB /16GB), Mezzanine slot x2,2.5" SAS/SATA support, HDD selectable (selection from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB)1000BASE-X port x2 (remote wake-up function supported)NEC EXPRESSBUILDER bundled(including NEC ESMPRO/Manager/Agent)


・ N8400-112F Express5800/B120b (6C/E5645)Xeon processor E5645(2.4GHz)×1 , 12MB 3rd cache, QuickPath interconnect 5.86GT/s,Memory selectable (required selection from 2GB/4GB/8GB/16GB), Mezzanine slot x2,2.5" SAS/SATA support, HDD selectable (selection from73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB)1000BASE-X port x2 (remote wake-up function supported)NEC EXPRESSBUILDER bundled(including NEC ESMPRO/Manager/Agent)


Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64

・ N8400-113F Express5800/B120b (4C/E5606)Xeon processor E5606(2.13GHz)×1 , 8MB 3rd cache, QuickPath interconnect 4.8GT/s,Memory selectable (required selection from 2GB/4GB/8GB/16GB) , Mezzanine slot x2,2.5" SAS/SATA support, HDD selectable (selection from73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB)1000BASE-X port x2 (remote wake-up function supported)NEC EXPRESSBUILDER bundled(including NEC ESMPRO/Manager/Agent)


Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64



CPU Blade (Express5800/B120b)

Express5800/B120b (base machine)

Front View

Power LED

Power Switch

STATUS LED

Dump switch

Reset switch

Ether1 Link/Access LED

Ether2 Link/Access LED

ID Switch

ID LED

SUV connector (USB, VGA)

HDD Slot

HDD Slot


External View


CPU Blade (Express5800/B120b)

CPU Blade Configuration
B120b

x8

Not provided as standard. Up to 4 memory modules (64GB) can be installed per CPU; an additional CPU is required to install 5 or more memory modules. Up to 8 memory modules (128GB) can be installed in a 2-CPU configuration. For more details about memory performance and RAS functions, please refer to the end of this document (RPQ is required to enable the RAS functions).

CPU

CPU slot

x2
1 CPU is installed as standard. Up to 2 processors can be installed.
N8400-110F supports the following CPU: * CPU Kit (Xeon X5670(2.93GHz)) [N8401-052F]
N8400-111F supports the following CPU: * CPU Kit (Xeon X5650(2.66GHz)) [N8401-055F]
N8400-114F supports the following CPU: * CPU Kit (Xeon L5640(2.26GHz)) [N8401-053F]
N8400-112F supports the following CPU: * CPU Kit (Xeon E5645(2.4GHz)) [N8401-056F]
N8400-113F supports the following CPU: * CPU Kit (Xeon E5606(2.13GHz)) [N8401-057F]

Memory (Registered type)

Memory Slot

(RPQ is required to enable RAS functions.)For more details about the memory size supported by each OS, please refer to the next page of this document.

* 2GB Additional Memory module(2GB x 1) [N8402-075F]* 4GB Additional Memory module(4GB x 1) [N8402-076F]* 8GB Additional Memory module(8GB x 1) [N8402-061F]* 16GB Additional Memory module(16GB x 1) [N8402-048F]

Important:
* Up to 4 memory modules (64GB) can be installed per CPU. An additional CPU is required to install 5 or more memory modules.
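The DIMM rule above can be sketched as a small helper (illustrative function names; the 4-per-CPU and 8-slot limits come from this page):

```python
def max_memory_modules(installed_cpus: int) -> int:
    """Up to 4 memory modules per installed CPU, 8 slots total."""
    return min(4 * installed_cpus, 8)

def max_memory_gb(installed_cpus: int, module_gb: int = 16) -> int:
    """Maximum capacity with one module size, e.g. 16GB DIMMs:
    64GB with 1 CPU, 128GB with 2 CPUs."""
    return max_memory_modules(installed_cpus) * module_gb
```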


CPU Blade (Express5800/B120b)

CPU Blade Configuration

OS / Maximum memory supported by the OS / Maximum available memory capacity (including OS and applications)

Microsoft Windows Server 2003 R2, Standard Edition; Microsoft Windows Server 2008 Standard: 4GB / HW-DEP disabled: about 3.3GB; HW-DEP enabled: 4GB (the Execute Disable Bit (XD bit) is enabled by default)
Microsoft Windows Server 2003 R2, Enterprise Edition; Microsoft Windows Server 2008 Enterprise: 64GB / 64GB
Microsoft Windows Server 2003 R2, Standard x64 Edition; Microsoft Windows Server 2008 Standard (x64); Microsoft Windows Server 2008 R2 Standard: 32GB / 32GB
Microsoft Windows Server 2003 R2, Enterprise x64 Edition: 1TB / 128GB
Microsoft Windows Server 2008 Enterprise (x64); Microsoft Windows Server 2008 R2 Enterprise: 2TB / 128GB
Red Hat Enterprise Linux ES4: 16GB / 16GB
Red Hat Enterprise Linux ES4 (EM64T); Red Hat Enterprise Linux AS4: 64GB / 64GB
Red Hat Enterprise Linux AS4 (EM64T): 128GB / 128GB
Red Hat Enterprise Linux 5; Red Hat Enterprise Linux 5 Advanced Platform; Red Hat Enterprise Linux 6 (x86): 16GB / 16GB
Red Hat Enterprise Linux 5 (EM64T); Red Hat Enterprise Linux 5 Advanced Platform (EM64T): 256GB / 128GB
Red Hat Enterprise Linux 6 (x86_64): 2TB / 128GB

Note: usable memory capacity may vary depending on the basic architecture (x86) and the supported OS.
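The usable-memory rule in the table (the OS limit capped by the 128GB platform maximum) can be expressed as a quick check (a sketch; the dictionary keys abbreviate the OS names and cover only a few rows):

```python
PLATFORM_MAX_GB = 128  # B120b maximum installable memory

# OS-supported maximum memory in GB (subset of the table above; 2TB = 2048GB).
OS_MAX_GB = {
    "WS2008 Standard": 4,
    "WS2008 R2 Standard": 32,
    "WS2008 R2 Enterprise": 2048,
    "RHEL6 x86": 16,
    "RHEL6 x86_64": 2048,
}

def usable_memory_gb(os_name: str) -> int:
    """Usable memory is the OS limit, capped by what the blade can hold."""
    return min(OS_MAX_GB[os_name], PLATFORM_MAX_GB)
```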


CPU Blade (Express5800/B120b)

CPU Blade Configuration

B120b

Hard Disk Drive (with RAID controller)

HDD

HDD

RAID 0 or 1

up to 2

Disk interface

* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided)
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided)

RAID controller [N8403-026](LSILogic SAS controller, 256MB cache)

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64

* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided)
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

* Disk mirroring (RAID1) allows the disk drives to be hot-swapped. Disk drives of identical size are required for the RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed. * SAS and SATA HDDs cannot be mixed.


CPU Blade (Express5800/B120b)

CPU Blade Configuration

B120b

Hard Disk Drive (with SATA interface card)

Disk interface SATA interface card [N8403-027]

HDD

HDD

a. non-RAID

up to 2

HDD

HDD

b. RAID 0 or 1

On-board disk array function (LSI Embedded MegaRAID)・By installing SATA interface card, on-board disk array function is enabled.

Supported OS (RAID with SATA interface card): 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2
Supported OS (non-RAID): 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64

* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

* To use on-board disk array function, identical size/rotation speed disks are required.* Disk mirroring (RAID1) allows the disk drives to be hot-swapped.

・By installing the LSI Embedded MegaRAID system support software, the internal SATA disks can be configured for mirroring (RAID1) or striping (RAID0). Two disk drives of identical size are required.
・Linux and VMware do not support this on-board disk array function.

Controller function list:
RAID controller, RAID: Windows OK / Linux OK / VMware OK
SATA interface, non-RAID: Windows OK / Linux OK / VMware OK *1
SATA interface, RAID: Windows OK / Linux NG / VMware NG

*1: Only non-RAID mode is supported (a BIOS setting change is required). Disk monitoring by NEC ESMPRO Agent and Express Report Service related modules is not possible.
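The controller function list can be encoded as a lookup table for configuration checks (a sketch; the string keys simply mirror the table rows above):

```python
# (controller, mode) -> {OS family: supported}
SUPPORT = {
    ("RAID controller", "RAID"):     {"Windows": True, "Linux": True,  "VMware": True},
    ("SATA interface", "non-RAID"):  {"Windows": True, "Linux": True,  "VMware": True},  # VMware: non-RAID only, see *1
    ("SATA interface", "RAID"):      {"Windows": True, "Linux": False, "VMware": False},
}

def is_supported(controller: str, mode: str, os_family: str) -> bool:
    """Return whether a controller/mode combination is supported on an OS family."""
    return SUPPORT[(controller, mode)][os_family]
```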


CPU Blade (Express5800/B120b)

Standard LAN port

Standard LANInterface

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card

• By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available. • The additional LAN interfaces support AFT and ALB, but you cannot configure an AFT or ALB team across multiple LAN interfaces.

To Connect Switch Modules to LAN,

CPU Blade Configuration

Standard LAN interface
• The standard LAN interface supports AFT and ALB. You cannot use the standard LAN interface and an optional LAN board in a team to configure AFT or ALB.
• The standard LAN interface supports the Remote Wake-Up function. Optional LAN boards do not support this function.

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64

(3) Adding a Fibre channel I/F (4Gbps)

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1 *2
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2

Mezzanine slot 1

Mezzanine slot 1

See “Switch Module Slot” of SIGMABLADE-Hv2/-M

To Connect Switch Modules to Fibre Channel,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

4Gb Fibre channel interface card *2* Fibre Channel Controller(4Gbps/2ch) [N8403-018]

*1: The vIO control function is not supported.*2: Connecting NEC Storage is not supported when the installed OS is RHEL6.

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card* 10GBASE-KR(2ch) adapter [N8403-035]Mezzanine slot 1

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card. Connecting NEC Storage is not supported when the installed OS is RHEL6.

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot 1To Connect Switch Modules to Fibre Channel,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

8Gb Fibre channel interface card *2 * Fibre Channel Controller(8Gbps/2ch)[N8403-034]

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL5, EL5x64

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL5, EL5x64, EL6, EL6x64


CPU Blade (Express5800/B120b)

Mezzanine Slot 2 (Type I or Type II)

(1) Adding a 2ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1 *2
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADEHv2/-M

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available. The additional LAN interfaces support AFT and ALB, but you cannot configure an AFT or ALB team across multiple LAN interfaces.

CPU Blade Configuration

(2) Adding a 4ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(4ch) adapter [N8403-020] *1 *2 *3
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2 *3

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADEHv2/-M

Mezzanine slot 2

*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card in slots 5 and 6. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or the 1Gb Pass-Through Card in switch module slots 5, 6, 7 and 8.

*1: The vIO control function is not supported.
*2: Connecting NEC Storage is not supported when the installed OS is RHEL6.

Supported OS: 2003R2, 2003R2x64, 2008, 2008x64, 2008 R2, EL4, EL4x64, EL5, EL5x64, EL6, EL6x64

(5) Adding a Fibre channel IF (4Gbps)

To Connect Switch Modules to Fibre Channel,See “Switch Module Slot” of SIGMABLADEHv2/-M

Mezzanine slot 24Gb Fibre channel interface card *3* Fibre Channel Controller(4Gbps/2ch) [N8403-018]

*3: Connecting NEC Storage is not supported when the installed OS is RHEL6.

Supported OS: EL4x64, EL4, 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2

(4) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 2

To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through Card and 10Gb Intelligent L3 Switch (N8406-051F) only.
10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.
Connecting NEC Storage is not supported when the installed OS is RHEL6.

(6) Adding a Fibre channel IF (8Gbps)

To connect switch modules to Fibre Channel, see “Switch Module Slot” of SIGMABLADE-H v2/-M.

Mezzanine slot 2
8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Supported OS: 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2

(3) Adding a 10G LAN I/F

10GbE LAN interface card
* 10GbE(2ch) adapter [N8403-024]

10GbE Intelligent L3 Switch connection only supported
Mezzanine slot 2

* When installed in SIGMABLADE-H v2, two 10GbE Intelligent L3 Switches are required to use both ports.
* Connecting NEC Storage is not supported when the installed OS is RHEL6.

To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M.

Supported OS: 2003R2, 2003R2x64, EL4x64, EL4, EL5x64, EL5, 2008 R2, EL6x64, EL6

Supported OS: 2003R2, 2003R2x64, EL5x64, EL5, 2008x64, 2008, 2008 R2 ■, EL6x64 ■, EL6 ■


CPU Blade (Express5800/B120b)

iSCSI Boot

(1) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 1

By installing an additional LAN I/F supporting iSCSI boot, iSCSI boot becomes available.

To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-M.

CPU Blade Configuration

(2) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 2
To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-M.

(3) Adding 4ch LAN/IF (for iSCSI boot)

* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022]
Mezzanine slot 2
To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-M.

Supported OS: 2008x64, 2008, 2008 R2

Notes for iSCSI boot
* To use iSCSI boot, a dedicated network is required for the storage device.
* Only Windows Server 2008/2008 R2 supports iSCSI boot.
* Configurations of the LAN driver and OS need to be changed.
* For the latest information on supported SAN boot, contact your sales representative.
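As a quick illustration, the notes above reduce to three checks. The helper below is a sketch for illustration only (not an NEC tool); the adapter model numbers are the iSCSI-capable cards listed on this page:

```python
# Illustrative check of the iSCSI boot prerequisites listed in the notes above.
SUPPORTED_OS = {"Windows Server 2008", "Windows Server 2008 R2"}
ISCSI_ADAPTERS = {"N8403-021", "N8403-022"}  # 2ch / 4ch 1000BASE-T (for iSCSI)

def iscsi_boot_ready(os_name, dedicated_storage_network, adapter):
    """True only when the OS, network, and adapter requirements are all met."""
    return (os_name in SUPPORTED_OS
            and dedicated_storage_network      # dedicated network for the storage device
            and adapter in ISCSI_ADAPTERS)

print(iscsi_boot_ready("Windows Server 2008 R2", True, "N8403-022"))  # True
print(iscsi_boot_ready("RHEL5", True, "N8403-021"))                   # False
```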


CPU Blade (Express5800/B120b)

CPU Blade Configuration (Option card and Switch module)

The following combinations of option cards and switch modules are supported.
* For more details about the switch modules, please refer to the Blade Enclosure section of this document.

SIGMABLADE-H v2

LAN switch modules: 1Gb Intelligent L2 Switch [N8406-022A], 1Gb Intelligent L3 Switch [N8406-023/N8406-023A], 1:10Gb Intelligent L3 Switch [N8406-044], 1Gb Pass-Through Card [N8406-029], 10Gb Pass-Through Card [N8406-036], 10Gb Intelligent L3 Switch [N8406-026]

* 1000BASE-T (2ch) N8403-017: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (4ch) N8403-020: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (2ch) (for iSCSI) N8403-021: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (4ch) (for iSCSI) N8403-022: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* Standard LAN (B120a/B120a-d/B120b/B120b-d): OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 10Gb-KR (2ch) N8403-035: OK with the 10Gb Pass-Through Card
* 10GbE (2ch) N8403-024: OK with the 10Gb Intelligent L3 Switch

FC switch modules: 4G FC Switch (12 port) [N8406-019], 4G FC Switch (24 port) [N8406-020], 8G FC Switch (12 port) [N8406-040], 8G FC Switch (24 port) [N8406-042], FC Pass-Through Card [N8406-030]

* 4G Fibre Channel controller (2ch) N8403-018: OK with all of the above FC switch modules
* 8G Fibre Channel controller (2ch) N8403-034: OK with the 8G FC Switch (12 port) and 8G FC Switch (24 port)

*1: 10G SFP+ (N8406-037) is not supported
Combinations not listed are not supported.

SIGMABLADE-M

LAN switch modules: 1Gb Intelligent L2 Switch [N8406-022A], 1Gb Intelligent L3 Switch [N8406-023/N8406-023A], 1:10Gb Intelligent L3 Switch [N8406-044], 1Gb Pass-Through Card [N8406-011], 10Gb Pass-Through Card [N8406-035], 10Gb Intelligent L3 Switch [N8406-026]

* 1000BASE-T (2ch) N8403-017: OK with the 1Gb Intelligent L2 Switch *2, 1Gb Intelligent L3 Switch *2, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (4ch) N8403-020: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (2ch) (for iSCSI) N8403-021: OK with the 1Gb Intelligent L2 Switch *2, 1Gb Intelligent L3 Switch *2, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 1000BASE-T (4ch) (for iSCSI) N8403-022: OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* Standard LAN (B120a/B120a-d/B120b/B120b-d): OK with the 1Gb Intelligent L2/L3 Switches, 1:10Gb Intelligent L3 Switch, 1Gb Pass-Through Card, and 10Gb Pass-Through Card *1
* 10Gb-KR (2ch) N8403-035: OK with the 10Gb Pass-Through Card
* 10GbE (2ch) N8403-024: OK with the 10Gb Intelligent L3 Switch

FC switch modules: 4G FC Switch (12 port) [N8406-019], 8G FC Switch (12 port) [N8406-040], FC Pass-Through Card [N8406-021]

* 4G Fibre Channel controller (2ch) N8403-018: OK with all of the above FC switch modules
* 8G Fibre Channel controller (2ch) N8403-034: OK with the 8G FC Switch (12 port)

*1: 10G SFP+ (N8406-037) is not supported
*2: 1Gb Interlink Expansion Card (N8406-013) is supported
Combinations not listed are not supported.


CPU Blade (Express5800/B120b)

CPU Blade Configuration

On-board RAS chip

Server Management (EXPRESSSCOPE Engine 2)

* EXPRESSSCOPE Engine 2 (provided as standard)

* Has a LAN port dedicated for remote management (No expansion slot required)

B120b comes standard with the "EXPRESSSCOPE Engine 2" remote management chip.

Server management functions in the B120b (EXPRESSSCOPE Engine 2)

Server monitoring: temperature / HDD / fan / electric power / voltage monitoring, including degradation monitoring (CPU/memory)

Collecting hardware event log

Stall monitoring / automatic reboot: monitors booting, BIOS/POST stall, OS stall, and shutdown

Alerting: HW error, boot error, and OS panic (via SNMP, e-mail)

HW error, boot error, and OS panic (via COM port (modem))

Remote Console(via COM port/LAN)

POST/BIOS setup, DOS utility

Panic screen, Boot screen

CUI screen (OS console)

GUI screen (OS console)

Remote controlling(via COM port/LAN)

Remote reset/power on-off/dump

OS shutdown

Remote media (CD/DVD, FD, Flash) (via LAN)

CLP (Command Line Protocol, DMTF compliant)

Remote control via Web browser (no dedicated application required)

Remote batch

Scheduling (UPS not required)

Maintenance Remote boot (PXE boot), maintenance utility

Others Automatic IP address setting via DNS/DHCP

Remote wakeup Wake On LAN, Wake On Ring

Group management Monitoring/controlling by the group

Industry standard IPMI 2.0

* When the H/W remote KVM console function is used, the number of colors is reduced to 65,536 at a resolution of 1280x1024. For more details, see "Server Management" in the Technical Guide.

* All of the above management functions are enabled regardless of OS status.
* Some functions depend on the configuration. For more details, see "Server Management" in the Technical Guide.


Memory Installation Guide

Memory Installation Guide (Registered memory)

[Diagram: DIMM slot layout]
- 1 CPU: maximum of 4 DIMMs
- 2 CPUs: maximum of 8 DIMMs (the memory installation sequence differs from that of the 1-CPU configuration)

This server adopts the QPI architecture (serial transfer). The memory installation rules for this architecture differ from those for the previous FSB architecture (parallel transfer).

• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core/multi-task applications.
• A 1-CPU configuration supports 2- and 3-way interleave; a 2-CPU configuration supports 2-, 4-, and 6-way interleave.

* Memory interleave is a technique that improves performance by accessing multiple memory banks at a time.

The memory installation sequence is fixed: install DIMMs in the numbered order shown in the illustrations, from the largest capacity to the smallest.
* When using VMware ESX, at least one memory module must be installed for each CPU.
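The two rules above (a fixed slot sequence, filled from the largest DIMM capacity to the smallest) can be sketched as follows. The slot numbers are illustrative only, and the helper is not an NEC configuration tool:

```python
# Sketch: map DIMMs onto the printed slot sequence, largest capacity first.
SLOT_ORDER_1CPU = [1, 2, 3, 4]               # illustrative sequence for a 1-CPU configuration
SLOT_ORDER_2CPU = [1, 2, 3, 4, 5, 6, 7, 8]   # the 2-CPU sequence differs on the real board

def plan_installation(dimm_sizes_gb, cpus=1):
    """Return (slot, size) pairs: DIMMs sorted large-to-small onto the slot order."""
    order = SLOT_ORDER_1CPU if cpus == 1 else SLOT_ORDER_2CPU
    if len(dimm_sizes_gb) > len(order):
        raise ValueError("more DIMMs than installable slots")
    sizes = sorted(dimm_sizes_gb, reverse=True)   # capacity from large to small
    return list(zip(order, sizes))

print(plan_installation([2, 4, 4], cpus=1))  # [(1, 4), (2, 4), (3, 2)]
```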

CPU Blade (Express5800/B120b)

Memory interleave

• This system uses Independent Channel mode; installing multiple DIMMs in different channels increases memory bandwidth. Memory interleave is used to increase memory access speed.

• The BIOS recognizes the memory configuration and enables memory interleave. Memory areas that do not allow interleaving are left non-interleaved.

<Interleaving in single-processor configuration>
[Diagram: one CPU with 2GB + 2GB DIMMs (2-way interleaved) and one CPU with 2GB + 4GB DIMMs (partially interleaved)]

[Diagram: dual-processor configuration with three 2GB DIMMs per CPU (6 x 2GB)]

See next page for DIMM configuration examples to enable memory interleave.

2-way interleaved

The first 2GB of the 4GB DIMM and the 2GB DIMM in the other channel are 2-way interleaved. The second 2GB of the 4GB DIMM is not interleaved.


6-way interleaved*3-way interleaved for each processor when NUMA is activated on the BIOS setup menu.

<Interleaving in dual-processor configuration>
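The partial-interleave behavior described above can be expressed as a small calculation. This is a sketch under the simplifying assumption of one DIMM per channel, not an NEC tool:

```python
# Sketch: with unequal DIMMs across channels, only the overlapping capacity interleaves.
def interleave_split(channel_sizes_gb):
    """Return (interleaved_gb, non_interleaved_gb) for one CPU's populated channels."""
    populated = [s for s in channel_sizes_gb if s > 0]
    if len(populated) < 2:
        return (0, sum(populated))   # a single channel cannot interleave
    common = min(populated)          # each channel contributes up to the smallest DIMM
    interleaved = common * len(populated)
    return (interleaved, sum(populated) - interleaved)

print(interleave_split([4, 2]))      # (4, 2): 2GB of the 4GB DIMM pairs with the 2GB DIMM
print(interleave_split([2, 2, 2]))   # (6, 0): fully 3-way interleaved
```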

Page 121: System Configuration Guide January 2012 - NEC · 2012. 3. 8. · Packs up to 16 CPU Blades in 10U-height chassis Features - Height: 10U - Supports up to 16 CPU Blades and eight switch

DIMM configuration examples for Memory Interleave (1-processor configuration)
• Memory is 2- or 3-way interleaved in a single-processor configuration. The interleave mode varies with the memory configuration.
• To increase memory access speed, use a configuration that allows memory interleave.

Memory Size | 2-way                        | 3-way
4GB         | 2 x 2GB DIMM                 | -
6GB         | -                            | 3 x 2GB DIMM
8GB         | 4 x 2GB DIMM / 2 x 4GB DIMM  | -
12GB        | -                            | 3 x 4GB DIMM
16GB        | 4 x 4GB DIMM / 2 x 8GB DIMM  | -
24GB        | -                            | 3 x 8GB DIMM
32GB        | 4 x 8GB DIMM / 2 x 16GB DIMM | -
48GB        | -                            | 3 x 16GB DIMM
64GB        | 4 x 16GB DIMM                | -

CPU Blade (Express5800/B120b)


DIMM configuration examples for Memory Interleave (2-processor configuration)
• Memory is 2-, 4-, or 6-way interleaved in a dual-processor configuration. The interleave mode varies with the memory configuration.
• The table shows DIMM configuration examples for memory interleave when NUMA (Non-Uniform Memory Access) is disabled (the default). When NUMA is used, memory interleave is enabled for each processor; see the DIMM configuration examples for the 1-processor configuration on the previous page.
• To increase memory access speed, use 4-way, 2+6-way, 4+6-way, or 6-way interleave.

CPU Blade (Express5800/B120b)

Memory Size | 2-way         | 4-way         | 2+6-way                                           | 6-way
4GB         | 2 x 2GB DIMM  | -             | -                                                 | -
6GB         | -             | -             | -                                                 | -
8GB         | 2 x 4GB DIMM  | -             | -                                                 | -
10GB        | -             | -             | -                                                 | -
12GB        | -             | -             | -                                                 | 6 x 2GB DIMM
14GB        | -             | -             | -                                                 | -
16GB        | 2 x 8GB DIMM  | 4 x 4GB DIMM  | 8 x 2GB DIMM / 4 x 2GB + 2 x 4GB DIMM             | -
20GB        | -             | -             | 6 x 2GB + 2 x 4GB DIMM / 4 x 4GB + 2 x 2GB DIMM   | -
24GB        | -             | -             | -                                                 | 6 x 4GB DIMM
28GB        | -             | -             | 2 x 2GB + 6 x 4GB DIMM                            | -
32GB        | 2 x 16GB DIMM | 4 x 8GB DIMM  | 8 x 4GB DIMM / 4 x 4GB + 2 x 8GB DIMM             | -
40GB        | -             | -             | 6 x 4GB + 2 x 8GB DIMM / 2 x 4GB + 4 x 8GB DIMM   | -
48GB        | -             | -             | -                                                 | 6 x 8GB DIMM
50GB        | -             | -             | -                                                 | -
52GB        | -             | -             | 2 x 2GB + 6 x 8GB DIMM                            | -
56GB        | -             | -             | 2 x 4GB + 6 x 8GB DIMM                            | -
64GB        | -             | 4 x 16GB DIMM | 8 x 8GB DIMM / 4 x 8GB + 2 x 16GB DIMM            | -
80GB        | -             | -             | 6 x 8GB + 2 x 16GB DIMM / 2 x 8GB + 4 x 16GB DIMM | -
96GB        | -             | -             | -                                                 | 6 x 16GB DIMM
100GB       | -             | -             | 2 x 2GB + 6 x 16GB DIMM                           | -
128GB       | -             | -             | 8 x 16GB DIMM                                     | -


Memory clock (Registered)

The memory clock differs depending on the CPU and on the memory modules installed.

[Flowchart] CPU: Xeon X5670/X5650/L5640/E5645/E5606
- A 16GB DIMM is installed: memory clock 1066MHz
- No 16GB DIMM is installed: memory clock 1333MHz

CPU Blade (Express5800/B120b)

[example] CPU 1: 16GB + 8GB + 8GB + 8GB, CPU 2: 8GB + 4GB + 4GB + 2GB -> memory clock 1066MHz (a 16GB DIMM is installed)

Memory RAS Feature

To use the memory RAS features of memory mirroring or lockstep (x8 SDDC), RPQ is required. Contact a sales representative for details.


Express5800/B120a

CPU Blade (Express5800/B120a)

ICON: The icons on the configuration charts stand for the supported OSes as shown in the following table. For requests to use a Linux OS, please also refer to the website

http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

・・・ Supported   ■・・・ Certified by Distributor

2003 : Windows Server 2003 (with SP1 or later)
2003x64 : Windows Server 2003, x64 Edition
2003R2 : Windows Server 2003 R2
2003R2x64 : Windows Server 2003 R2, x64 Edition
2008 : Windows Server 2008
2008x64 : Windows Server 2008 (x64)
2008R2 : Windows Server 2008 R2 (x64)
EL4 : Red Hat Enterprise Linux ES4/AS4
EL4x64 : Red Hat Enterprise Linux ES4 (EM64T)/AS4 (EM64T)
EL5 : Red Hat Enterprise Linux 5/AP5
EL5x64 : Red Hat Enterprise Linux 5 (EM64T)/AP5 (EM64T)
EL6 : Red Hat Enterprise Linux 6
EL6x64 : Red Hat Enterprise Linux 6 (x86_64)
ESXi4.1 : VMware ESXi 4.1


CPU Blade (Express5800/B120a)

Express5800/B120a

High-performance blade server featuring Xeon processors

Features
- Xeon X5550 / E5504 / E5502 / L5520 processors
- Memory: Max. 128GB (DDR3)
- 1000BASE-X ports x2 as standard

Model name: Express5800/B120a
N-Code: N8400-085F / N8400-084F / N8400-082F / N8400-083F
CPU: Intel Xeon L5520 / X5550 / E5504 / E5502
Clock frequency: 2.26GHz / 2.66GHz / 2.0GHz / 1.86GHz
L3 cache: 8MB (L5520, X5550) / 4MB (E5504, E5502)
Intel QuickPath Interconnect: 5.86GT/s (L5520) / 6.4GT/s (X5550) / 4.8GT/s (E5504, E5502)
Maximum (standard) CPUs / cores per CPU: 2 (1) / 4 cores; E5502: 2 (1) / 2 cores
Intel 64: Supported
Intel Virtualization Technology: Supported
Enhanced Intel SpeedStep Technology: Supported
Intel Hyper-Threading Technology: Supported (L5520, X5550); not supported (E5504, E5502)
Intel Turbo Boost Technology: Supported (L5520, X5550); not supported (E5504, E5502)
Chipset: Intel 5500
Memory: DDR3-1,066 Registered DIMM, x4 SDDC with ECC, lockstep mode (x8 SDDC), memory mirroring *1
        DDR3-1,333 Unbuffered DIMM, x4 SDDC with ECC, lockstep mode (x8 SDDC), memory mirroring *1
Transfer rate: Registered DIMM 800MHz/1066MHz; Unbuffered DIMM 800MHz/1066MHz/1333MHz
Memory capacity: Standard: none (mandatory option); Maximum: 128GB (16GB x8)
Storage: Standard: diskless (up to 2 drives; select from SAS-HDD 73.2GB/146.5GB/300GB/600GB/900GB or SATA-HDD 160GB/500GB/1TB)
         Maximum: 2TB (SATA-HDD 1TB x2)
Internal HDD hot plug: Supported
Disk controller: SAS / SATA
RAID support: RAID 0, 1
2.5" disk bays [open]: 2 [2]
Disk interface slot [open]: 1 [1] (selection mandatory option)
Mezzanine slot [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported)
Display function / graphic accelerator: chipset built-in (VRAM: 32MB)
Resolution: 640x480, 800x600, 1,024x768, 1,280x1,024 (up to 16,770,000 colors each)
Interface: 1000BASE-X (connected to midplane) x2; SUV (Serial x1, VGA x1, USB x2) x1 (when a SUV cable is connected)
Weight (maximum): 6.3kg
Dimensions (W x D x H mm): 51.6 x 515.4 x 180.7
Supported enclosures: SIGMABLADE-M, SIGMABLADE-H, SIGMABLADE-H v2
Max power consumption per CPU Blade: 351W (DC) / 408W (DC) / 336W (DC) / 303W (DC) (L5520 / X5550 / E5504 / E5502)
Max power consumption, SIGMABLADE-M: 4,027W (AC)/4,109VA / 4,545W (AC)/4,638VA / 3,891W (AC)/3,970VA / 3,591W (AC)/3,664VA
Max power consumption, SIGMABLADE-H v2: 7,987W (AC)/8,150VA / 9,036W (AC)/9,220VA / 7,711W (AC)/7,869VA / 7,105W (AC)/7,250VA
Temperature / humidity: During operation: 10 to 35C / 20 to 80% (non-condensing); When stored: -10 to 55C / 20 to 80% (non-condensing)
Accessories: NEC EXPRESSBUILDER (including NEC ESMPRO Manager/Agent)
Supported OS:
  Windows:
    Microsoft Windows Server 2003 R2 Standard Edition *2 / Enterprise Edition *2
    Microsoft Windows Server 2003 R2 Standard x64 Edition *2 / Enterprise x64 Edition *2
    Microsoft Windows Server 2008 Standard *2 / Enterprise *2
    Microsoft Windows Server 2008 Standard (x64) *2 / Enterprise (x64) *2
    Microsoft Windows Server 2008 R2 Standard *2,*3 / Enterprise *2,*3
  Linux:
    Red Hat Enterprise Linux ES4 *3,*4 / Red Hat Enterprise Linux ES4 (EM64T) *3,*4
    Red Hat Enterprise Linux AS4 *3,*4 / Red Hat Enterprise Linux AS4 (EM64T) *3,*4
    Red Hat Enterprise Linux 5 *3,*4 / Red Hat Enterprise Linux 5 (EM64T) *3,*4
    Red Hat Enterprise Linux Advanced Platform 5 *3,*4
    Red Hat Enterprise Linux Advanced Platform 5 (EM64T) *3,*4

* A set of console devices is required for each system.

*1: RPQ is required to enable the RAS functions for memory. In addition, all memory modules should be 2GB/4GB/8GB/16GB to use the x4 SDDC function.
*2: Not supported if clean-installed by DeploymentManager. (DeploymentManager is not bundled.)
*3: On-board disk array is not supported.
*4: For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or your regional sales support contact in the NEC/J ITPF Global Business Development Division.


CPU Blade (Express5800/B120a)

Express5800/B120a Quick Sheet

Express5800/B120a

CPU socket, Memory Slot

Expansion Slot

Mezzanine

HDD Slot

Open

Open

Disk I/F

RAID controller

Disk

XeonE5502 / E5504 /X5550 / L5520

Additional memory

Additional memory

Additional memory

Mandatory option

With RAID controller
* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided)
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided)
* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided)
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)
SAS and SATA HDDs cannot be mixed.

With SATA interface card
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

Mezzanine slot 1

The following products are DDR3-1,066MHz Registered type. Each product includes 1 memory module.
* Additional 2GB memory module set [N8402-075F] (2GB x1)
* Additional 4GB memory module set [N8402-076F] (4GB x1)
* Additional 8GB memory module set [N8402-061F] (8GB x1)
* Additional 16GB memory module set [N8402-048F] (16GB x1)

The following product is DDR3-1,333MHz Unbuffered type. Each product includes 2 memory modules.
* Additional 4GB memory module set [N8402-041] (2GB x2)
* Registered type and Unbuffered type cannot be mixed.

Memory (DDR3)

CPU
* CPU Kit (Xeon E5504(2.0GHz)) [N8401-036] (can be installed on N8400-082F)
* CPU Kit (Xeon E5502(1.86GHz)) [N8401-037] (can be installed on N8400-083F)
* CPU Kit (Xeon X5550(2.66GHz)) [N8401-038] (can be installed on N8400-084F)
* CPU Kit (Xeon L5520(2.26GHz)) [N8401-039] (can be installed on N8400-085F)

SIGMABLADE-H v2/-M CPU Blade slot

Mezzanine slot 2

Mezzanine Cards for Mezzanine Slots
* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter [N8403-020] *1
* 1000BASE-T(2ch) adapter (iSCSI support) [N8403-021]
* 1000BASE-T(4ch) adapter (iSCSI support) [N8403-022] *1
* 10GbE(2ch) adapter [N8403-024] *1
* 10GBASE-KR adapter [N8403-035]
* Fibre Channel controller (4Gbps/2ch) [N8403-018]
* Fibre Channel controller (8Gbps/2ch) [N8403-034]

*1: Can only be installed in the Type-2 mezzanine slot

SATA interfacecard

alternative

* RAID controller [N8403-026]
* SATA interface card [N8403-027]
* Selection mandatory option

To use SAS HDDs, the RAID controller is necessary.

Additional memory

Additional memory

Additional memory

Additional memory

Open

Note
To install new options, please update the BIOS, firmware, drivers, and EM firmware.
When the B120a is installed in SIGMABLADE-H, a GbE Intelligent Switch is required to use the PXE boot function.


CPU Blade (Express5800/B120a)

Express5800/B120a basic configuration

* N8405-016BF SIGMABLADE-M

Dimensions:
Width (mm): 484.8
Depth (mm): 829
Height (mm): 264.2 (6U)
* Protruding objects included


* N8400-083F Express5800/B120a (2C/E5502)
Xeon processor E5502 (1.86GHz) x1, 4MB 3rd-level cache, QuickPath Interconnect 4.8GT/s,
memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2,
2.5" SAS/SATA support, HDD selectable (from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB),
1000BASE-X port x2 (remote wake-up function supported),
NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


* N8405-040AF SIGMABLADE-H v2

Dimensions:
Width (mm): 483
Depth (mm): 823
Height (mm): 442 (10U)
* Protruding objects included


* N8400-082F Express5800/B120a (4C/E5504)
Xeon processor E5504 (2.0GHz) x1, 4MB 3rd-level cache, QuickPath Interconnect 4.8GT/s,
memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2,
2.5" SAS/SATA support, HDD selectable (from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB),
1000BASE-X port x2 (remote wake-up function supported),
NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


Supported OS: EL4x64, EL4, 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2

* The remote wake-up function allows the CPU Blades to be powered on from a remote console via LAN.
* To maintain cooling efficiency, any vacant slots in the Blade Enclosure must be covered with slot covers.

* N8400-084F Express5800/B120a (4C/X5550)
Xeon processor X5550 (2.66GHz) x1, 8MB 3rd-level cache, QuickPath Interconnect 6.4GT/s,
memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2,
2.5" SAS/SATA support, HDD selectable (from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB),
1000BASE-X port x2 (remote wake-up function supported),
NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


* An Intelligent Switch or Pass-Through Card is required.

* N8400-085F Express5800/B120a (4C/L5520)
Xeon processor L5520 (2.26GHz) x1, 8MB 3rd-level cache, QuickPath Interconnect 5.86GT/s,
memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2,
2.5" SAS/SATA support, HDD selectable (from 73.2GB/146.5GB/300GB/600GB/900GB/160GB/500GB/1TB),
1000BASE-X port x2 (remote wake-up function supported),
NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


Supported OS: EL4x64, EL4, 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2



CPU Blade (Express5800/B120a)

Express5800/B120a (base machine)

Front ViewPower LED

Power Switch

STATUS LED

Dump switch

Reset switch

Ether1 Link/Access LED

Ether2 Link/Access LED

ID Switch

ID LED

SUV connector (USB, VGA)

HDD Slot 0
HDD Slot 1

External View


CPU Blade (Express5800/B120a)

CPU Blade Configuration (B120a)

x8

Memory Slot
Not provided as standard. Up to 4 memory modules (64GB) can be installed per CPU; an additional CPU is required to install 5 or more memory modules. Up to 8 memory modules (128GB) can be installed in a 2-CPU configuration.
For more details about memory performance and RAS functions, please refer to the end of this document. (RPQ is required to enable the RAS functions.)

CPU

CPU slot

x2
1 CPU is installed as standard. Up to 2 processors can be installed.
N8400-082F supports the following CPU: * CPU Kit (Xeon E5504(2.0GHz)) [N8401-036]
N8400-083F supports the following CPU: * CPU Kit (Xeon E5502(1.86GHz)) [N8401-037]
N8400-084F supports the following CPU: * CPU Kit (Xeon X5550(2.66GHz)) [N8401-038]
N8400-085F supports the following CPU: * CPU Kit (Xeon L5520(2.26GHz)) [N8401-039]

Memory (Registered type)

Memory Slot

(RPQ is required to enable the RAS functions.)
For more details about the memory size supported by each OS, please refer to the next page of this document.

* 2GB Additional Memory module(2GB x 1) [N8402-075F]* 4GB Additional Memory module(4GB x 1) [N8402-076F]* 8GB Additional Memory module(8GB x 1) [N8402-061F]* 16GB Additional Memory module(16GB x 1) [N8402-048F]

Important
* Registered type and Unbuffered type cannot be mixed.
* Up to 4 memory modules (64GB) can be installed per CPU. An additional CPU is required to install 5 or more memory modules.
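The slot-count rule in the Important note above can be sketched as a validity check (an illustrative helper, not an NEC tool):

```python
# Sketch of the B120a rule: 4 Registered DIMM slots per installed CPU, 2 CPUs maximum.
def max_dimms(cpus):
    return 4 * cpus   # 4 modules (64GB) with 1 CPU, 8 modules (128GB) with 2 CPUs

def config_valid(cpus, dimm_count):
    return 1 <= cpus <= 2 and dimm_count <= max_dimms(cpus)

print(config_valid(1, 5))  # False: a second CPU is required for 5 or more modules
print(config_valid(2, 8))  # True
```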


CPU Blade (Express5800/B120a)

CPU Blade Configuration (B120a)

x8

Memory Slot
Not provided as standard. Up to 2 sets (8GB) can be installed per CPU; an additional CPU is required to install 3 or more sets. Up to 4 sets (16GB) can be installed in a 2-CPU configuration.
For more details about memory performance and RAS functions, please refer to the end of this document. (RPQ is required to enable the RAS functions.)
For more details about the memory size supported by each OS, please refer to the table below.

* 4GB Additional Memory module(2GB x 2) [N8402-041]

Memory (Unbuffered type)

Important
* Registered type and Unbuffered type cannot be mixed.
* Up to 2 sets (8GB) can be installed per CPU. An additional CPU is required to install 3 or more sets.

OS | Maximum memory supported by the OS | Largest available memory capacity (including OS applications)

Microsoft Windows Server 2003 R2, Standard Edition / Microsoft Windows Server 2008 Standard | 4GB | about 3.3GB with HW-DEP disabled; 4GB with HW-DEP enabled (the Execute Disable Bit (XD Bit) is enabled by default)
Microsoft Windows Server 2003 R2, Enterprise Edition / Microsoft Windows Server 2008 Enterprise | 64GB | 64GB
Microsoft Windows Server 2003 R2, Standard x64 Edition / Microsoft Windows Server 2008 Standard (x64) / Microsoft Windows Server 2008 R2 Standard | 32GB | 32GB
Microsoft Windows Server 2003 R2, Enterprise x64 Edition | 1TB | 128GB
Microsoft Windows Server 2008 Enterprise (x64) / Microsoft Windows Server 2008 R2 Enterprise | 2TB | 128GB
Red Hat Enterprise Linux ES4 / ES4 (EM64T) | 16GB | 16GB
Red Hat Enterprise Linux AS4 | 64GB | 64GB
Red Hat Enterprise Linux AS4 (EM64T) | 128GB | 128GB
Red Hat Enterprise Linux 5 / 5 Advanced Platform | 16GB | 16GB
Red Hat Enterprise Linux 5 (EM64T) / 5 Advanced Platform (EM64T) | 256GB | 128GB
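In practice the usable capacity is the smallest of the installed memory, the blade maximum, and the OS limit from the table. The sketch below encodes a few rows for illustration (the dictionary keys are shorthand, not official product names):

```python
# Sketch: usable memory is capped by the blade maximum and by the OS limit (GB).
OS_LIMIT_GB = {
    "Windows Server 2008 Standard": 4,
    "Windows Server 2008 R2 Standard": 32,
    "Windows Server 2008 R2 Enterprise": 128,   # 2TB OS maximum, 128GB on this blade
    "RHEL5 (EM64T)": 128,                       # 256GB OS maximum, 128GB on this blade
}
BLADE_MAX_GB = 128

def usable_memory_gb(installed_gb, os_name):
    return min(installed_gb, BLADE_MAX_GB, OS_LIMIT_GB[os_name])

print(usable_memory_gb(64, "Windows Server 2008 R2 Standard"))  # 32
```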


CPU Blade (Express5800/B120a)

CPU Blade Configuration (B120a)

Hard Disk Drive (with RAID controller)

[Diagram: up to 2 HDDs configured as RAID 0 or 1 behind the disk interface]

RAID controller [N8403-026] (LSI Logic SAS controller, 256MB cache)

* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided)
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided)
* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided)
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

Supported OS: EL4x64, EL4, 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2

* Disk mirroring (RAID 1) allows the disk drives to be hot-swapped. Disk drives of identical size are required for the RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed.
* SAS and SATA HDDs cannot be mixed.


CPU Blade (Express5800/B120a)

CPU Blade Configuration (B120a)

Hard Disk Drive (with SATA interface card)

Disk interface: SATA interface card [N8403-027]

[Diagram: up to 2 HDDs, either (a) non-RAID or (b) RAID 0 or 1]

On-board disk array function (LSI Embedded MegaRAID)
・By installing the SATA interface card, the on-board disk array function is enabled.

Supported OS (non-RAID): EL4x64, EL4, 2003R2x64, 2003R2, 2008x64, 2008, EL5x64, EL5, 2008 R2
Supported OS (RAID): 2003R2, 2003R2x64, 2008x64, 2008, 2008 R2

* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

* To use the on-board disk array function, disks of identical size and rotation speed are required.
* Disk mirroring (RAID 1) allows the disk drives to be hot-swapped.

・By installing the LSI Embedded MegaRAID system support software, internal SATA disks can be configured for mirroring (RAID 1) or striping (RAID 0). Two disk drives of identical size are required.
・Linux and VMware do not support this on-board disk array function.

Controller function list

Controller      | Mode     | Windows | Linux | VMware
RAID controller | RAID     | OK      | OK    | OK
SATA interface  | Non-RAID | OK      | OK    | OK *1
SATA interface  | RAID     | OK      | NG    | NG

*1: Supported on ESX/ESXi 4.0 or later. Only non-RAID mode is supported (a BIOS setting change is required). Disk monitoring by ESMPRO Agent and Express Report Service related modules is not possible.
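The controller function list above can be encoded directly, which is convenient when scripting configuration checks (a sketch, not an NEC tool; the keys are shorthand labels):

```python
# Sketch of the controller function list: supported (True) combinations on the B120a.
SUPPORT = {
    ("RAID controller", "RAID"):    {"Windows": True, "Linux": True,  "VMware": True},
    ("SATA interface", "Non-RAID"): {"Windows": True, "Linux": True,  "VMware": True},  # VMware: ESX/ESXi 4.0 or later
    ("SATA interface", "RAID"):     {"Windows": True, "Linux": False, "VMware": False},
}

def is_supported(controller, mode, os_family):
    return SUPPORT[(controller, mode)][os_family]

print(is_supported("SATA interface", "RAID", "Linux"))  # False
```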


CPU Blade (Express5800/B120a)

Standard LAN port

Standard LAN Interface

To connect switch modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M.

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card

• By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available.
• The additional LAN interfaces support AFT and ALB, but you cannot configure an AFT or ALB team across multiple LAN interfaces.


CPU Blade Configuration

Standard LAN interface
• The standard LAN interface supports AFT and ALB. You cannot team the standard LAN interface with an optional LAN board to configure AFT or ALB.
• The standard LAN interface supports the Remote Wake-Up function. Optional LAN boards do not support this function.

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 4 / 4 x64, Red Hat Enterprise Linux 5 / 5 x64

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

Mezzanine slot 1. See "Switch Module Slot" of SIGMABLADE-H v2/-M.

*1: The vIO control function is not supported.

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 1

To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64

(3) Adding a Fibre channel I/F (4Gbps)

Mezzanine slot 1
To connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 4 / 4 x64, Red Hat Enterprise Linux 5 / 5 x64

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot 1
To connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64

* RHEL5 version should be RHEL5.4 or later.


CPU Blade (Express5800/B120a)

Mezzanine Slot 2 (Type I or Type II)

(1) Adding a 2ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available. The additional LAN interface supports AFT and ALB, but you cannot configure an AFT or ALB team across multiple LAN interfaces.

CPU Blade Configuration

(2) Adding a 4ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(4ch) adapter [N8403-020] *1, *2
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2

To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card in slots 5 and 6. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or the 1Gb Pass-Through Card in switch module slots 5, 6, 7, and 8.

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 4 / 4 x64, Red Hat Enterprise Linux 5 / 5 x64

(3) Adding a 10Gbps LAN I/F

10GbE LAN interface card
* 10GbE(2ch) adapter [N8403-024]

Only the 10Gb Intelligent L3 Switch is supported.

To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

* When you use this 10GbE Adapter (2ch) with SIGMABLADE-H v2, you need two 10GbE L3 Switches to use both of the two ports.

(4) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 2

To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64

(5) Adding a Fibre channel I/F (4Gbps)

To connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2
4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 4 / 4 x64, Red Hat Enterprise Linux 5 / 5 x64

(6) Adding a Fibre channel I/F (8Gbps)

To connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2
8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64

* RHEL5 version should be RHEL5.4 or later.


CPU Blade (Express5800/B120a)

iSCSI Boot

(1) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 1

By installing an additional LAN I/F that supports iSCSI Boot, iSCSI Boot becomes available.

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-M

(2) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 2
To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-M.

(3) Adding 4ch LAN/IF (for iSCSI boot)

* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022]
Mezzanine slot 2
To connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-M.

Supported OS: Windows Server 2008 / 2008 x64, Windows Server 2008 R2

CPU Blade Configuration

Notice for iSCSI boot
* To use iSCSI boot, a dedicated network is required for the storage device.
* Only Windows Server 2008 / 2008 R2 support iSCSI Boot.
* The LAN driver and OS configurations must be changed.
* For the latest information on supported SAN boot, contact your sales representative.


CPU Blade (Express5800/B120a)

CPU Blade Configuration (Option card and Switch module)

The following combinations of option cards and switch modules are supported.
* For more details about the switch modules, refer to the Blade Enclosure section of this document.

SIGMABLADE-H v2 (LAN)

Option card                                 1Gb L2 Switch  1Gb L3 Switch     1:10Gb L3 Switch  1Gb Pass-Through  10Gb Pass-Through  10Gb L3 Switch
                                            (N8406-022A)   (N8406-023/023A)  (N8406-044)       Card (N8406-029)  Card (N8406-036)   (N8406-026)
1000BASE-T (2ch) N8403-017                  OK             OK                OK                OK                -                  OK*1
1000BASE-T (4ch) N8403-020                  OK             OK                OK                OK                -                  OK*1
1000BASE-T (2ch) (for iSCSI) N8403-021      OK             OK                OK                OK                -                  OK*1
1000BASE-T (4ch) (for iSCSI) N8403-022      OK             OK                OK                OK                -                  OK*1
Standard LAN (B120a/B120a-d/B120b/B120b-d)  OK             OK                OK                OK                -                  OK*1
10Gb-KR (2ch) N8403-035                     -              -                 -                 -                 OK                 -
10GbE Adapter (2ch) N8403-024               -              -                 -                 -                 -                  OK

*1: 10G SFP+ (N8406-037) is not supported.
Blank (-): not supported.

SIGMABLADE-H v2 (FC)

Option card                                   4G FC Switch  4G FC Switch  8G FC Switch  8G FC Switch  FC Pass-Through
                                              (12port)      (24port)      (12port)      (24port)      Card
                                              (N8406-019)   (N8406-020)   (N8406-040)   (N8406-042)   (N8406-030)
4G Fibre Channel controller (2ch) N8403-018   OK            OK            OK            OK            OK
8G Fibre Channel controller (2ch) N8403-034   -             -             OK            OK            -

SIGMABLADE-M (LAN)

Option card                                 1Gb L2 Switch  1Gb L3 Switch     1:10Gb L3 Switch  1Gb Pass-Through  10Gb Pass-Through  10Gb L3 Switch
                                            (N8406-022A)   (N8406-023/023A)  (N8406-044)       Card (N8406-011)  Card (N8406-035)   (N8406-026)
1000BASE-T (2ch) N8403-017                  OK*2           OK*2              OK                OK                -                  OK*1
1000BASE-T (4ch) N8403-020                  OK             OK                OK                OK                -                  OK*1
1000BASE-T (2ch) (for iSCSI) N8403-021      OK*2           OK*2              OK                OK                -                  OK*1
1000BASE-T (4ch) (for iSCSI) N8403-022      OK             OK                OK                OK                -                  OK*1
Standard LAN (B120a/B120a-d/B120b/B120b-d)  OK             OK                OK                OK                -                  OK*1
10Gb-KR (2ch) N8403-035                     -              -                 -                 -                 OK                 -
10GbE Adapter (2ch) N8403-024               -              -                 -                 -                 -                  OK

*1: 10G SFP+ (N8406-037) is not supported.
*2: The 1Gb Interlink Expansion Card (N8406-013) is supported.
Blank (-): not supported.

SIGMABLADE-M (FC)

Option card                                   4G FC Switch  8G FC Switch  FC Pass-Through
                                              (12port)      (12port)      Card
                                              (N8406-019)   (N8406-040)   (N8406-021)
4G Fibre Channel controller (2ch) N8403-018   OK            OK            OK
8G Fibre Channel controller (2ch) N8403-034   -             OK            -
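For configuration tooling, the combination rules above can be encoded as a simple lookup (an illustrative Python sketch covering only a subset of the combinations stated explicitly in the text; not an NEC utility):

```python
# Subset of the card / switch-module compatibility rules stated in this guide,
# encoded as (card N-code, switch module) pairs. Names are taken from the text.
SUPPORTED = {
    ("N8403-024", "10Gb Intelligent L3 Switch"),  # 10GbE adapter: L3 switch only
    ("N8403-035", "10Gb Pass-Through Card"),      # 10GBASE-KR adapter
    ("N8403-018", "4G FC Switch (12port)"),       # 4G FC controller
    ("N8403-018", "FC Pass-Through Card"),
}

def compatible(card, switch_module):
    """Return True when the option card may be paired with the switch module."""
    return (card, switch_module) in SUPPORTED

print(compatible("N8403-024", "10Gb Intelligent L3 Switch"))  # True
print(compatible("N8403-024", "1Gb Pass-Through Card"))       # False
```

A real validator would load the full tables for both enclosures and also check the mezzanine slot type of the card.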


CPU Blade (Express5800/B120a)

CPU Blade Configuration

On-board RAS chip

Server Management (EXPRESSSCOPE Engine 2)

* EXPRESSSCOPE Engine 2(provided as standard)

* Has a LAN port dedicated for remote management (No expansion slot required)

B120a comes standard with the "EXPRESSSCOPE Engine 2" remote management chip.

Server management functions in B120a (EXPRESSSCOPE Engine 2)

* Server monitoring: temperature / HDD / fan / electric power / voltage monitoring, including degraded-state monitoring (CPU / memory)
* Hardware event log collection
* Stall monitoring / automatic reboot: monitors booting, BIOS/POST stall, OS stall, and shutdown
* Alerting: HW error, boot error, and OS panic (via SNMP or e-mail; also via COM port (modem))
* Remote console (via COM port / LAN): POST/BIOS setup, DOS utility, panic screen, boot screen, CUI screen (OS console), GUI screen (OS console)
* Remote control (via COM port / LAN): remote reset / power on-off / dump, OS shutdown, remote media (CD/DVD, FD, Flash) (via LAN), CLP (Command Line Protocol, DMTF compliant), remote control via Web browser (no dedicated application required), remote batch, scheduling (UPS not required)
* Maintenance: remote boot (PXE boot), maintenance utility
* Others: automatic IP address setting via DNS/DHCP
* Remote wakeup: Wake On LAN, Wake On Ring
* Group management: monitoring / controlling by group
* Industry standard: IPMI 2.0

* When the H/W remote KVM console function is used, the number of colors is reduced to 65,536 at a resolution of 1280x1024. For more details, see "Server Management" in the Technical Guide.
* All of the above management functions are enabled regardless of OS status.
* Some functions depend on the configuration. For more details, see "Server Management" in the Technical Guide.


Memory Installation Guide

Memory Installation Guide (Registered memory)

[Diagram: DIMM slot numbering for CPU 1 and CPU 2]
* 1CPU configuration: maximum of 4 DIMMs
* 2CPU configuration: maximum of 8 DIMMs (the memory installation sequence differs from that of the 1CPU configuration)

This server adopts the new QPI architecture (serial transfer). The memory installation rules for this architecture differ from those for the previous FSB architecture (parallel transfer).

• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core / multi-task applications.
• A 1CPU configuration supports 2- and 3-way interleave; a 2CPU configuration supports 2-, 4-, and 6-way interleave.

* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.
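The interleaving idea can be sketched in a few lines (a conceptual model only, not the actual BIOS address-decoding logic): consecutive access-unit-sized addresses are spread round-robin across the populated channels, so sequential accesses hit different banks in parallel.

```python
# Conceptual sketch of N-way memory interleaving: consecutive lines of memory
# are assigned round-robin to channels, letting sequential accesses overlap.

LINE = 64  # bytes per access unit (illustrative value)

def channel_for(address, ways):
    """Return which channel a physical address maps to under N-way interleave."""
    return (address // LINE) % ways

# Under 3-way interleave, four consecutive 64-byte lines land on channels 0,1,2,0.
print([channel_for(a, 3) for a in (0, 64, 128, 192)])  # [0, 1, 2, 0]
```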

The memory installation sequence is fixed: install DIMMs in the numbered order shown in these illustrations, from the largest capacity to the smallest.
* When using VMware ESX, at least one memory module must be connected to each CPU.

CPU Blade (Express5800/B120a)

Memory interleave

• This system uses Independent Channel mode, increasing memory bandwidth by installing multiple DIMMs in different channels. Memory interleave is used to increase memory access speed.
• The BIOS recognizes the memory configuration and enables memory interleave. Memory areas that do not allow interleaving are not interleaved.

<Interleaving in single-processor configuration>
[Diagram: CPU 1 populated with a 4GB DIMM and 2GB DIMMs]
2-way interleaved: the first 2GB of the 4GB DIMM and the 2GB DIMM in the other channel are 2-way interleaved; the second 2GB of the 4GB DIMM is not interleaved.

<Interleaving in dual-processor configuration>
[Diagram: CPU 1 and CPU 2 each populated with three 2GB DIMMs]
6-way interleaved (3-way interleaved per processor when NUMA (Non-Uniform Memory Access) is activated on the BIOS setup menu).

See the next page for DIMM configuration examples that enable memory interleave.


DIMM configuration examples for Memory Interleave (1-processor configuration)
• Memory is 2- or 3-way interleaved in a single-processor configuration. The interleave modes vary with the memory configuration.
• To increase memory access speed, use a configuration that enables memory interleave.

Memory Interleave Mode

Memory Size   2-way interleave                3-way interleave
4GB           2 x 2GB DIMM                    -
6GB           -                               3 x 2GB DIMM
8GB           4 x 2GB DIMM or 2 x 4GB DIMM    -
12GB          -                               3 x 4GB DIMM
16GB          2 x 8GB DIMM or 4 x 4GB DIMM    -
24GB          -                               3 x 8GB DIMM
32GB          4 x 8GB DIMM or 2 x 16GB DIMM   -
48GB          -                               3 x 16GB DIMM
64GB          4 x 16GB DIMM                   -
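The pattern behind the 1-processor table can be summarized as a small rule (a simplified sketch that assumes identical DIMMs; the real BIOS decision also depends on channel placement): three identical modules interleave 3-way, while two or four interleave 2-way.

```python
# Simplified rule behind the 1-processor interleave table above.
# Assumes all installed DIMMs are identical; mixed sizes are not covered.

def interleave_mode(dimm_sizes_gb):
    """Return the interleave mode implied by the table for identical DIMMs."""
    if len(set(dimm_sizes_gb)) != 1:
        return None  # mixed sizes: outside this simplified rule
    n = len(dimm_sizes_gb)
    if n == 3:
        return "3-way"
    if n in (2, 4):
        return "2-way"
    return None

print(interleave_mode([2, 2]))        # 2-way  (4GB row)
print(interleave_mode([4, 4, 4]))     # 3-way  (12GB row)
print(interleave_mode([8, 8, 8, 8]))  # 2-way  (32GB row)
```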


DIMM configuration examples for Memory Interleave (2-processor configuration)
• Memory is 2-, 4-, or 6-way interleaved in a dual-processor configuration. The interleave modes vary with the memory configuration.
• The table shows DIMM configuration examples for memory interleave when NUMA (Non-Uniform Memory Access) is disabled (the default). When NUMA is used, memory interleave is enabled per processor; see the DIMM configuration examples for the 1-processor configuration on the previous page.
• To increase memory access speed, use a 4-way, 2+6-way, 4+6-way, or 6-way interleave configuration.

CPU Blade (Express5800/B120a)

Memory Interleave Mode

Memory Size   2-way      4-way      2+6-way              6-way
4GB           2 x 2GB    -          -                    -
8GB           2 x 4GB    4 x 2GB    -                    -
10GB          -          -          -                    -
12GB          -          -          -                    6 x 2GB
14GB          -          -          -                    -
16GB          2 x 8GB    4 x 4GB    8 x 2GB              4 x 2GB + 2 x 4GB
20GB          -          -          6 x 2GB + 2 x 4GB    4 x 4GB + 2 x 2GB
24GB          -          -          -                    6 x 4GB
28GB          -          -          2 x 2GB + 6 x 4GB    -
32GB          2 x 16GB   4 x 8GB    8 x 4GB              4 x 4GB + 2 x 8GB
40GB          -          -          6 x 4GB + 2 x 8GB    2 x 4GB + 4 x 8GB
48GB          -          -          -                    6 x 8GB
52GB          -          -          2 x 2GB + 6 x 8GB    -
56GB          -          -          2 x 4GB + 6 x 8GB    -
64GB          -          4 x 16GB   8 x 8GB              4 x 8GB + 2 x 16GB
80GB          -          -          6 x 8GB + 2 x 16GB   2 x 8GB + 4 x 16GB
96GB          -          -          -                    6 x 16GB
100GB         -          -          2 x 2GB + 6 x 16GB   -
128GB         -          -          8 x 16GB             -

(Entries are DIMM counts; e.g. "2 x 4GB" means two 4GB DIMMs.)


Memory clock (Registered)

The memory clock differs depending on the CPU and the presence of an Additional Memory Module:
* Xeon E5502 or E5504: the memory clock is 800MHz.
* Xeon L5520, X5550, or X5570: the memory clock is 800MHz when 4 or more DIMMs are installed per processor, or when an 8GB or 16GB Additional Memory Module is installed; otherwise it is 1066MHz.

[Example diagram: CPU 1 and CPU 2 each populated with 4 DIMMs (a mix of 8GB, 4GB, and 2GB modules); the memory clock is 800MHz on both processors.]

CPU Blade (Express5800/B120a)
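The clock selection can be expressed as a small function (a sketch of the decision chart as reconstructed here, not NEC software; CPU model strings are illustrative):

```python
# Sketch of the Registered-DIMM memory clock chart above.
# E5502/E5504 always run memory at 800MHz; L5520/X5550/X5570 drop from
# 1066MHz to 800MHz with 4+ DIMMs per processor or 8GB/16GB modules.

def registered_memory_clock(cpu, dimms_per_cpu, has_8gb_or_16gb_module):
    """Return the memory clock in MHz for the CPUs covered by the chart."""
    if cpu in ("E5502", "E5504"):
        return 800
    if cpu in ("L5520", "X5550", "X5570"):
        if dimms_per_cpu >= 4 or has_8gb_or_16gb_module:
            return 800
        return 1066
    raise ValueError("CPU not covered by this chart")

print(registered_memory_clock("E5504", 2, False))  # 800
print(registered_memory_clock("X5550", 2, False))  # 1066
print(registered_memory_clock("X5550", 4, False))  # 800
```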

Memory RAS Feature

To use the memory RAS features of memory mirroring and lockstep (x8 SDDC), RPQ is required. Contact ITNWGSL for details.

To use x4 SDDC, replace all the memory with 2GB / 4GB / 8GB / 16GB memory.



Memory Installation Guide

Memory Installation Guide (Unbuffered memory)

[Diagram: DIMM slot numbering for CPU 1 and CPU 2]
* 1CPU configuration: maximum of 4 DIMMs
* 2CPU configuration: maximum of 8 DIMMs (the memory installation sequence differs from that of the 1CPU configuration)

This server adopts the new QPI architecture (serial transfer). The memory installation rules for this architecture differ from those for the previous FSB architecture (parallel transfer).

• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core / multi-task applications.
• A 1CPU configuration supports 2- and 3-way interleave; a 2CPU configuration supports 2-, 4-, and 6-way interleave.

* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.

The memory installation sequence is fixed: install DIMMs in the numbered order shown in these illustrations.
* When using VMware ESX, at least one memory module must be connected to each CPU.

CPU Blade (Express5800/B120a)

Memory interleave

• This system uses Independent Channel mode, increasing memory bandwidth by installing multiple DIMMs in different channels. Memory interleave is used to increase memory access speed.
• The BIOS recognizes the memory configuration and enables memory interleave. Memory areas that do not allow interleaving are not interleaved.

<Interleaving in single-processor configuration>
[Diagram: CPU 1 populated with 2GB DIMMs in two channels]
2-way interleaved.

<Interleaving in dual-processor configuration>
[Diagram: CPU 1 and CPU 2 each populated with three 2GB DIMMs]
6-way interleaved (3-way interleaved per processor when NUMA (Non-Uniform Memory Access) is activated on the BIOS setup menu).

See the next page for DIMM configuration examples that enable memory interleave.


Memory clock (Unbuffered)

The memory clock differs depending on the CPU and the presence of an Additional Memory Module:
* Xeon E5502 or E5504: the memory clock is 800MHz.
* Xeon L5520: the memory clock is 800MHz when 4 or more DIMMs are installed per processor; otherwise it is 1066MHz.
* Xeon X5550: the memory clock is 1333MHz.

[Example diagram: DIMM configuration running at 1066MHz on both processors.]

CPU Blade (Express5800/B120a)

Memory RAS Feature

To support memory RAS feature of memory mirroring and lockstep (x8 SDDC), RPQ is required. Contact ITNWGSL for details.



Express5800/B120b-h

CPU Blade (Express5800/B120b-h)

ICON: The icons on the configuration charts indicate the supported OS, as shown in the following table. For requests to use a Linux OS, also refer to the website
http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

[Each icon is marked either "Supported" or "Certified by Distributor"]

2003 : Windows Server 2003 (with SP1 or later)
2003x64 : Windows Server 2003, x64 Edition
2003R2 : Windows Server 2003 R2
2003R2x64 : Windows Server 2003 R2, x64 Edition
2008 : Windows Server 2008
2008x64 : Windows Server 2008 (x64)
2008R2 : Windows Server 2008 R2 (x64)
EL4 : Red Hat Enterprise Linux ES4/AS4
EL4x64 : Red Hat Enterprise Linux ES4(EM64T)/AS4(EM64T)
EL5 : Red Hat Enterprise Linux 5/AP5
EL5x64 : Red Hat Enterprise Linux 5(EM64T)/AP5(EM64T)
EL6 : Red Hat Enterprise Linux 6
EL6x64 : Red Hat Enterprise Linux 6(x86_64)
ESXi4.1 : VMware ESXi 4.1


CPU Blade (Express5800/B120b-h)
Express5800/B120b-h (base machine)

High-end HDD-less Blade Server
Features
- Xeon X5680 / X5650 / E5645 / L5640 / L5630 processors
- Memory: max. 192GB (DDR3)
- 10GBASE-KR ports x2 as standard
- Supports VMware ESXi 4.1

Express5800/B120b-h

N-Code: N8400-104F / N8400-103F / N8400-102F / N8400-100F / N8400-099F
CPU: Intel Xeon Processor (low voltage) L5630 / (low voltage) L5640 / E5645 / X5650 / X5680
Clock frequency: 2.13GHz / 2.26GHz / 2.4GHz / 2.66GHz / 3.33GHz
Intel QuickPath Interconnect: 5.86GT/s (L5630 / L5640 / E5645), 6.4GT/s (X5650 / X5680)
L3 cache: 12MB
Maximum (standard) CPUs / cores per CPU: 2 (1) / 4 cores (L5630); 2 (1) / 6 cores (others)
Intel 64: supported
Intel Virtualization Technology: supported
Enhanced Intel SpeedStep Technology: supported
Intel Hyper-Threading Technology: supported
Intel Turbo Boost Technology: supported
Chipset: Intel 5500
Memory: DDR3-1066/1333 Registered DIMM, x4 SDDC with ECC, lockstep mode (x8 SDDC), memory mirroring *1
Memory speed: Registered DIMM, 1066MHz / 1333MHz
Memory (standard): none (mandatory option)
Memory (maximum): 192GB (16GB x 12)
Storage device, internal HDD (standard): diskless (no internal disk drives available)
Storage device, internal SSD (standard): diskless (up to 2 SSDs installable; up to 1 SSD on the 1-slot model)
Storage device (maximum): 200GB (internal SSD 100GB x2), or 100GB (internal SSD 100GB x1) on the 1-slot model
RAID: RAID 0/1 (not available on the 1-slot model)
Interface: SATA

* A set of console devices are required for each system.* For configuration details, see the System Configuration Guide provided for the Blade Enclosure.

*1: RPQ is required to enable RAS functions for memory. In addition, all memory modules should be 2GB/4GB/8GB/16GB to use x4 SDDC function.*2: Service Pack2 (SP2) or later is required.

2.5" Disk Bays [open]: Slot 0 [0]
Mezzanine slot [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 cards supported)
Display function: graphic accelerator integrated into the chipset (standard)
Video RAM: 32MB
Resolution (max.): 640x480, 800x600, 1,024x768, 1,280x1,024 (up to 16,770,000 colors each)
Interface: 10GBASE-KR (connected to Mid Plane) x2; SUV connector x1 (Serial x1, VGA x1, USB x2, available when the SUV cable is connected)
Weight (maximum): 6kg
Size (W x D x H mm): 51.6 x 515.4 x 180.7
Supported Enclosure: SIGMABLADE-M, SIGMABLADE-H v2
Power per CPU Blade (DC): 265W / 312W / 346W / 382W / 453W (L5630 / L5640 / E5645 / X5650 / X5680)
Power with SIGMABLADE-M (AC): 3,245W/3,312VA / 3,673W/3,748VA / 3,982W/4,063VA / 4,309W/4,397VA / 4,955W/5,056VA
Power with SIGMABLADE-H v2 (AC): 6,406W/6,536VA / 7,270W/7,418VA / 7,895W/8,057VA / 8,557W/8,732VA / 9,863W/10,065VA
Temperature / humidity conditions: during operation: 10 to 35C / 20 to 80% (non-condensing); when stored: -10 to 55C / 20 to 80% (non-condensing)
Accessories: NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
OS:
Windows: Microsoft Windows Server 2003 R2 Standard Edition *2 / Enterprise Edition *2; Microsoft Windows Server 2003 R2 Standard x64 Edition *2 / Enterprise x64 Edition *2; Microsoft Windows Server 2008 Standard / Enterprise; Microsoft Windows Server 2008 Standard (x64) / Enterprise (x64); Microsoft Windows Server 2008 R2 Standard / Enterprise
Linux: Red Hat Enterprise Linux 5 / Red Hat Enterprise Linux 5 (EM64T); Red Hat Enterprise Linux Advanced Platform 5 / Advanced Platform 5 (EM64T)
VMware: VMware ESXi 4.1


CPU Blade (Express5800/B120b-h)

Express5800/B120b-h Quick Sheet

Express5800/B120b-h
[Quick-sheet diagram: CPU sockets (1 mandatory, 1 additional), DDR3 memory slots (additional), mezzanine slot 1, mezzanine slot 2, SSD slots, and an internal USB port]
* CPU: Xeon X5680 / X5650 / E5645 / L5640 / L5630
* SSD slots: 2 (open) *1
* 100GB SSD [N8450-703]
*1: N8400-099F has only 1 slot

Following products are DDR3-1333MHz Registered type:
* Additional 2GB memory module set [N8402-088F] (2GB x1)
* Additional 4GB memory module set [N8402-089F] (4GB x1)
* Additional 8GB memory module set [N8402-052F] (8GB x1)

Following product is DDR3-1066MHz Registered type:
* Additional 16GB memory module set [N8402-053F] (16GB x1)

* CPU Kit (Xeon X5680(3.33GHz)) [N8401-046F]: can be installed on N8400-099F
* CPU Kit (Xeon X5650(2.66GHz)) [N8401-047F]: can be installed on N8400-100F
* CPU Kit (Xeon E5645(2.4GHz)) [N8401-048F]: can be installed on N8400-102F
* CPU Kit (Xeon L5640(2.26GHz)) [N8401-050F]: can be installed on N8400-103F
* CPU Kit (Xeon L5630(2.13GHz)) [N8401-051F]: can be installed on N8400-104F

Mezzanine Cards for Mezzanine Slots
* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter [N8403-020] *1
* 1000BASE-T(2ch) adapter (iSCSI support) [N8403-021]
* 1000BASE-T(4ch) adapter (iSCSI support) [N8403-022] *1
* 10GBASE-KR adapter [N8403-035]
* Fibre Channel controller (4Gbps/2ch) [N8403-018]
* Fibre Channel controller (8Gbps/2ch) [N8403-034]
*1: Can only be installed in the Type-2 mezzanine slot

SIGMABLADE-H v2,MCPU Blade slot

Note
* If this blade is installed in a SIGMABLADE-H v2, the number of fans required differs from that of other blade types. For more details, check the Blade Enclosure section of this document.
* To install new options, update the BIOS, firmware, drivers, and EM firmware.

Internal USB option
* VMware ESXi 4.1 base kit [N8403-038F]

VMware ESXi 4.1 Standalone (2CPU) is pre-installed on the USB flash memory.
- For BTO installation only
- The USB flash memory is for VMware ESXi only


CPU Blade (Express5800/B120b-h)

Express5800/B120b-h basic configuration

* N8405-016BF SIGMABLADE-M
Dimensions: Width 484.8mm, Depth 829mm, Height 264.2mm (6U)
* Protruding objects included

* N8405-040AF SIGMABLADE-H v2
Dimensions: Width 483mm, Depth 823mm, Height 442mm (10U)
* Protruding objects included

* N8400-099F Express5800/B120b-h (6C/X5680)
Xeon processor X5680 (3.33GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 6.4GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 2.5-inch SATA (SSD) supported, 10GBASE-KR port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Icons: Xeon X5680 (3.33GHz), ECC memory max. 192GB, 10GBASE-KR (on board) x2, EXPRESSBUILDER, ESMPRO Manager/Agent]
Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64, VMware ESXi 4.1

* N8400-100F Express5800/B120b-h (6C/X5650)
Xeon processor X5650 (2.66GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 6.4GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 2.5-inch SATA (SSD) supported, 10GBASE-KR port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Icons: Xeon X5650 (2.66GHz), ECC memory max. 192GB, 10GBASE-KR (on board) x2, EXPRESSBUILDER, ESMPRO Manager/Agent]
Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64, VMware ESXi 4.1

* N8400-102F Express5800/B120b-h (6C/E5645)
Xeon processor E5645 (2.4GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 5.86GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 2.5-inch SATA (SSD) supported, 10GBASE-KR port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Icons: Xeon E5645 (2.4GHz), ECC memory max. 192GB, 10GBASE-KR (on board) x2, EXPRESSBUILDER, ESMPRO Manager/Agent]
Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64, VMware ESXi 4.1

* N8400-103F Express5800/B120b-h (6C/L5640)
Xeon processor L5640 (2.26GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 5.86GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 2.5-inch SATA (SSD) supported, 10GBASE-KR port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Icons: Xeon L5640 (2.26GHz), ECC memory max. 192GB, 10GBASE-KR (on board) x2, EXPRESSBUILDER, ESMPRO Manager/Agent]
Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64, VMware ESXi 4.1

* N8400-104F Express5800/B120b-h (4C/L5630)
Xeon processor L5630 (2.13GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 5.86GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 2.5-inch SATA (SSD) supported, 10GBASE-KR port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Icons: Xeon L5630 (2.13GHz), ECC memory max. 192GB, 10GBASE-KR (on board) x2, EXPRESSBUILDER, ESMPRO Manager/Agent]
Supported OS: Windows Server 2003 R2 / 2003 R2 x64, Windows Server 2008 / 2008 x64, Windows Server 2008 R2, Red Hat Enterprise Linux 5 / 5 x64, VMware ESXi 4.1

* Remote wake-up function: allows the CPU Blades to be powered on from a remote console via LAN.
* To keep cooling efficiency, any vacant slots in the Blade Enclosure must be covered with slot covers.
* An Intelligent Switch or a Pass-Through Card is required.
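The remote wake-up function described above is standard Wake On LAN. A generic magic packet can be built and broadcast as follows (an illustrative sketch, not an NEC tool; the MAC and broadcast addresses are placeholders):

```python
# Generic Wake-on-LAN magic packet sender. Any WOL-capable NIC responds to
# this frame format: 6 x 0xFF followed by the target MAC repeated 16 times.
import socket

def magic_packet(mac):
    """Build the 102-byte WOL magic packet for a MAC like '00:11:22:33:44:55'."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + raw * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet over UDP (addresses are placeholders)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))  # 102
```

In practice the packet must reach the blade's subnet, so management tooling usually sends it from a host on the same management LAN.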


CPU Blade (Express5800/B120b-h)

Express5800/B120b-h (base machine)

Front View
* Power LED / Power switch
* STATUS LED / Dump switch
* Reset switch
* Ether1 Link/Access LED
* Ether2 Link/Access LED
* ID switch / ID LED
* SUV connector (USB, VGA)

External View


CPU Blade (Express5800/B120b-h)

CPU Blade ConfigurationB120b-h

Not provided as standard. Up to 9 memory modules (max. 96GB: 16GB x6) can be installed per CPU; an additional CPU is required to install 10 or more memory modules. Up to 18 memory modules (max. 192GB: 16GB x12) can be installed in a 2CPU configuration.

CPU

CPU slot

x2. 1 CPU is installed as standard; up to 2 processors can be installed.

N8400-99F supports following CPU* CPU Kit (Xeon X5680(3.33GHz)) [N8401-046F]

N8400-100F supports following CPU* CPU Kit (Xeon X5650(2.66GHz)) [N8401-047F]

N8400-102F supports the following CPU
* CPU Kit (Xeon E5645(2.4GHz)) [N8401-048F]

N8400-103F supports following CPU* CPU Kit (Xeon L5640(2.26GHz)) [N8401-050F]

N8400-104F supports following CPU* CPU Kit (Xeon L5630(2.13GHz)) [N8401-051F]

Memory (Registered type)
Memory slot: x18

Up to 192GB (16GB x12) can be installed in a 2CPU configuration. For more details about memory performance and RAS functions, refer to the end of this document (RPQ is required to enable the RAS functions). For the memory sizes supported by each OS, refer to the next page of this document.

* 2GB Additional Memory module(2GB x 1) [N8402-088F]* 4GB Additional Memory module(4GB x 1) [N8402-089F]* 8GB Additional Memory module(8GB x 1) [N8402-052F]* 16GB Additional Memory module(16GB x 1) [N8402-053F]

Important
* Up to 9 memory modules (up to 6 in the case of 16GB DIMMs) can be installed per CPU. An additional CPU is required to install more memory modules.


CPU Blade (Express5800/B120b)

CPU Blade Configuration Maximum memory capacity

Usable memory capacity varies with the basic architecture (x86 architecture) and the supported OS.

OSMaximum of the OS Support Memory:

Biggest available memory capacity(including for OS application)

Microsoft Windows Server 2003 R2, Standard EditionMicrosoft Windows Server 2008 Standard 4GB

* HW-DEP disabled : About 3.3GB

* HW-DEP enabled : 4GBExecute Disable Bit (XD Bit) is enabled by

default.Microsoft Windows Server 2003 R2, Enterprise EditionMicrosoft Windows Server 2008 Enterprise 64GB 64GB

Microsoft Windows Server 2003 R2, Standard x64Microsoft Windows Server 2008 Standard (x64)Microsoft Windows Server 2008 R2 Standard

32GB 32GB

Microsoft Windows Server 2003 R2, Enterprise x64 1TB 192GBMicrosoft Windows Server 2008 Enterprise (x64)Microsoft Windows Server 2008 R2 Enterprise 2TB 192GB

RedHat Enterprise Linux 5RedHat Enterprise Linux 5 Advanced Platform 16GB 16GB

RedHat Enterprise Linux 5(EM64T)RedHat Enterprise Linux 5 Advanced Platform(EM64T) 256GB 192GBRedHat Enterprise Linux 5 Advanced Platform(EM64T)

Vmware ESXi

256GB or 1TBMaximum

memory of virtual machine:255GB

192GB
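As a sketch of how the two columns of the table relate, the "biggest available memory" figure is simply the OS addressing limit clamped to the blade's 192GB physical maximum (and to what is actually installed). The dictionary keys below are illustrative shorthand, not NEC part names.

```python
# Illustrative only: derives the rightmost column of the table above from the
# OS limit and the B120b-h's 192GB physical maximum.
OS_LIMIT_GB = {  # values transcribed from the table above
    "WS2003R2 Std / WS2008 Std": 4,
    "WS2003R2 Ent / WS2008 Ent": 64,
    "WS2003R2 Std x64 / WS2008 Std x64 / WS2008R2 Std": 32,
    "WS2008 Ent x64 / WS2008R2 Ent": 2048,
    "RHEL5 (x86)": 16,
    "RHEL5 (EM64T)": 256,
}
PLATFORM_MAX_GB = 192

def usable_memory_gb(os_name, installed_gb):
    """Memory the OS can actually use on this blade."""
    return min(OS_LIMIT_GB[os_name], PLATFORM_MAX_GB, installed_gb)
```

So a fully populated 192GB blade yields 192GB under RHEL 5 (EM64T), but only 4GB under 32-bit Windows Server 2008 Standard.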

Page 151

B120b-h

SSD

SSD slot: SATA interface, up to 2 (*1), non hot-pluggable
* 100GB SSD [N8450-703] (2.5-inch SSD)
*1: N8400-099 supports 1 slot only.

CPU Blade (Express5800/B120b-h)

CPU Blade Configuration

[Supported OS: 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL5, EL5 x64]

On-board disk array function (LSI Embedded MegaRAID)
* The on-board disk array function is supported. By installing the LSI Embedded MegaRAID system support software, the internal SSDs can be configured for mirroring (RAID 1) or striping (RAID 0). Two SSDs of identical size are required.

Controller function list (SATA interface):

| Mode | Windows | Linux | VMware |
|---|---|---|---|
| Non-RAID | OK | OK | OK *3 |
| RAID | OK *2 | NG | NG |

*2: Not supported by N8400-099.
*3: Only non-RAID mode is supported (a BIOS setting change is required). Disk monitoring by NEC ESMPRO Agent and Express Report Service related modules is not possible.

Page 152

CPU Blade (Express5800/B120b-h)

CPU Blade Configuration (B120b-h)

USB option

Internal USB: VMware ESXi 4.1 pre-installed USB flash memory (BTO only)
* VMware ESXi 4.1 base kit [N8403-038F]

Notes for ordering the VMware ESXi 4.1 base kit
* VMware ESXi 4.1 base kit (N8403-038F) is a BTO option. This product may not be available in some regions; please contact an NEC sales representative to confirm availability in your country.
* VMware ESXi 4.1 Single Server is preinstalled in the USB flash memory (the USB flash memory is connected to the internal USB interface).
* This USB flash memory cannot be used for any purpose other than VMware ESXi 4.1.
* Installing VMware ESXi 4.1 on an HDD or SSD is not supported.
* NEC ESMPRO Agent cannot be installed in VMware ESXi 4.1. As a substitute, the EXPRESSSCOPE Engine can send alerts to NEC ESMPRO Manager directly, without going through NEC ESMPRO Agent.
* The VMware ESXi 4.1 recovery CD is included.
* The NUMA (Non-Uniform Memory Access) function in the BIOS setup menu must be enabled.
* In a 2-CPU configuration, at least one memory module must be connected to each CPU.
* The server switching function (N+1 recovery) of SigmaSystemCenter is not supported.

Page 153

CPU Blade (Express5800/B120b-h)

Standard LAN port

Standard LAN interface
* The standard LAN interface supports AFT and ALB. The standard LAN interface and an optional LAN board cannot be teamed together for AFT or ALB.
* The standard LAN interface supports the Remote Wake-Up function; optional LAN boards do not support this function.
* To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine Slot 1 (Type-I only)
* By installing additional LAN or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available.
* Additional LAN interfaces support AFT and ALB, but an AFT/ALB team cannot span multiple LAN interfaces.

(1) Adding a LAN I/F — mezzanine slot 1
1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
*1: The vIO control function is not supported.
To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.
[Supported OS: 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL5, EL5 x64, ESXi4]

(2) Adding a 10GBASE-KR I/F — mezzanine slot 1
10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.
To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

(3) Adding a Fibre Channel I/F (4Gbps) — mezzanine slot 1
4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

(4) Adding a Fibre Channel I/F (8Gbps) — mezzanine slot 1
8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

[Supported OS for (2)-(4): 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL5, EL5 x64, ESXi4]

Page 154

CPU Blade (Express5800/B120b-h)

Mezzanine Slot 2 (Type-I or Type-II)
* By installing additional LAN or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available.
* Additional LAN interfaces support AFT and ALB, but an AFT/ALB team cannot span multiple LAN interfaces.

(1) Adding a 2ch LAN I/F (1Gbps Ether) — mezzanine slot 2
1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
*1: The vIO control function is not supported.
To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

(2) Adding a 4ch LAN I/F (1Gbps Ether) — mezzanine slot 2
1GbE LAN interface card
* 1000BASE-T(4ch) adapter [N8403-020] *1 *2
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2
*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; the 1Gb Pass-Through Card cannot be installed in slots 5 and 6. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or 1Gb Pass-Through Card in switch module slots 5, 6, 7 and 8.
To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.
[Supported OS for (1)-(2): 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL5, EL5 x64, ESXi4]

(3) Adding a 10GBASE-KR I/F — mezzanine slot 2
10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.
To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

(4) Adding a Fibre Channel I/F (4Gbps) — mezzanine slot 2
4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

(5) Adding a Fibre Channel I/F (8Gbps) — mezzanine slot 2
8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.
[Supported OS for (3)-(5): 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL5, EL5 x64, ESXi4]

Page 155

CPU Blade (Express5800/B120b-h)

iSCSI Boot

By installing an additional LAN I/F that supports iSCSI boot, iSCSI boot becomes available.

(1) Adding a 2ch LAN I/F (for iSCSI boot) — mezzanine slot 1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

(2) Adding a 2ch LAN I/F (for iSCSI boot) — mezzanine slot 2
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

(3) Adding a 4ch LAN I/F (for iSCSI boot) — mezzanine slot 2
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022]

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-M.
[Supported OS: 2008, 2008 x64, 2008R2]

Notice for iSCSI boot
* A dedicated network for the storage device is required.
* Only Windows Server 2008/2008 R2 supports iSCSI boot.
* The LAN driver and OS configurations must be changed.
* For the latest information on supported SAN boot, contact your sales representative.

Page 156

CPU Blade (Express5800/B120b-h)

CPU Blade Configuration (Option card and Switch module)

The following combinations of option cards and switch modules are supported.
* For more details about the switch modules, please refer to the Blade Enclosure section of this document.

SIGMABLADE-H v2 — LAN:

| Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-029) | 10Gb Pass-Through Card (N8406-036) | 10Gb Intelligent L3 Switch (N8406-026) |
|---|---|---|---|---|---|---|
| 1000BASE-T (2ch) N8403-017 | OK | OK | OK | OK | OK *1 | - |
| 1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK *1 | - |
| 1000BASE-T (2ch) (for iSCSI) N8403-021 | OK | OK | OK | OK | OK *1 | - |
| 1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK *1 | - |
| 10Gb-KR standard LAN (B120b-h) | OK | OK | OK | OK | OK | - |
| 10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | - |
| 10GbE adapter (2ch) N8403-024 | - | - | - | - | - | OK |

*1: 10G SFP+ (N8406-037) is not supported.
"-": not supported.

SIGMABLADE-H v2 — FC:

| Option card | 4G FC Switch (12 port) (N8406-019) | 4G FC Switch (24 port) (N8406-020) | 8G FC Switch (12 port) (N8406-040) | 8G FC Switch (24 port) (N8406-042) | FC Pass-Through Card (N8406-030) |
|---|---|---|---|---|---|
| 4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK | OK | OK |
| 8G Fibre Channel controller (2ch) N8403-034 | - | - | OK | OK | - |

SIGMABLADE-M — LAN:

| Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-011) | 10Gb Pass-Through Card (N8406-035) | 10Gb Intelligent L3 Switch (N8406-026) |
|---|---|---|---|---|---|---|
| 1000BASE-T (2ch) N8403-017 | OK *2 | OK *2 | OK | OK | OK *1 | - |
| 1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK *1 | - |
| 1000BASE-T (2ch) (for iSCSI) N8403-021 | OK *2 | OK *2 | OK | OK | OK *1 | - |
| 1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK *1 | - |
| 10Gb-KR standard LAN (B120b-h) | OK | OK | OK | OK | OK | - |
| 10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | - |

*1: 10G SFP+ (N8406-037) is not supported.
*2: The 1Gb Interlink Expansion Card (N8406-013) is supported.
"-": not supported.

SIGMABLADE-M — FC:

| Option card | 4G FC Switch (12 port) (N8406-019) | 8G FC Switch (12 port) (N8406-040) | FC Pass-Through Card (N8406-021) |
|---|---|---|---|
| 4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK |
| 8G Fibre Channel controller (2ch) N8403-034 | - | OK | - |

Page 157

CPU Blade (Express5800/B120b-h)

CPU Blade Configuration

On-board RAS chip

Server Management (EXPRESSSCOPE Engine 2)

* EXPRESSSCOPE Engine 2 (provided as standard)
* Has a LAN port dedicated to remote management (no expansion slot required)

The B120b-h comes standard with the "EXPRESSSCOPE Engine 2" remote management chip. All of the management functions below are enabled by default, regardless of OS status. Some functions depend on the configuration; for more details, see "Server Management" in the Technical Guide.
* When the H/W remote KVM console function is used, the number of colors is reduced to 65,536 at a resolution of 1280x1024.

[Supplement] Server management functions in the B120b-h (EXPRESSSCOPE Engine 2)

Server monitoring: temperature / HDD / fan / electric power / voltage monitoring, including degradation monitoring (CPU / memory); hardware event log collection

Stall monitoring / automatic reboot: monitors booting, BIOS/POST stall, OS stall, shutdown

Alerting: HW error, boot error, and OS panic (by SNMP / E-mail); HW error, boot error, and OS panic (via COM port (modem))

Remote console (via COM port/LAN)
* POST/BIOS setup, DOS utility
* Panic screen, boot screen
* CUI screen (OS console)
* GUI screen (OS console)

Remote controlling (via COM port/LAN)
* Remote reset / power on-off / dump
* OS shutdown
* Remote media (CD/DVD, FD, Flash) (via LAN)

CLP (Command Line Protocol, DMTF compliant)

Remote control via Web browser (no dedicated application required)

Remote batch

Scheduling (not requiring UPS)

Maintenance Remote boot (PXE boot), maintenance utility

Others Automatic IP address setting via DNS/DHCP

Remote wakeup Wake On LAN, Wake On Ring

Group management Monitoring/controlling by the group

Industry standard IPMI 2.0
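Because the management LAN port is IPMI 2.0 compliant, a standard IPMI client should be able to query it over the network. The sketch below assumes the open-source `ipmitool` utility is installed and reachable on PATH; the host name and credentials are placeholders, and this is not an NEC-documented procedure.

```python
import subprocess

def ipmi_power_status_cmd(bmc_host, user, password):
    # Builds the ipmitool invocation; "lanplus" selects RMCP+, the IPMI 2.0
    # LAN transport. All connection parameters here are placeholders.
    return ["ipmitool", "-I", "lanplus", "-H", bmc_host,
            "-U", user, "-P", password, "chassis", "power", "status"]

def blade_power_status(bmc_host, user, password):
    """Runs ipmitool and returns its one-line power status report."""
    out = subprocess.run(ipmi_power_status_cmd(bmc_host, user, password),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

The same `chassis power on|off|reset` subcommands cover the "remote reset / power on-off" functions listed above, again assuming a generic IPMI 2.0 client behaves as expected against this BMC.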

Page 158

Memory Installation Guide

Memory Installation Guide (Registered memory)

This server adopts the QPI (QuickPath Interconnect, serial transfer) architecture. Its memory installation rules differ from those of the previous FSB (parallel transfer) architecture.
* The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
* This installation guide targets multi-core / multi-task applications.
* A 1-CPU configuration supports 2- and 3-way interleave; a 2-CPU configuration supports 2-, 4-, and 6-way interleave.
* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.

1 CPU: maximum of 9 DIMMs. 2 CPUs: maximum of 18 DIMMs (the installation sequence differs from the 1-CPU case).
The memory installation sequence is fixed: install modules in the numbered slot order shown in the illustrations, from the largest capacity to the smallest.
* When using VMware ESX/ESXi, at least one memory module must be connected to each CPU.

CPU Blade (Express5800/B120b-h)

[Figure: DIMM slot installation order — slots numbered 1-9 for the 1-CPU configuration, and 1-18 across CPU 1 and CPU 2 for the 2-CPU configuration]

Memory interleave

* This system uses Independent Channel mode; installing multiple DIMMs in different channels increases memory bandwidth. Memory interleave is used to increase memory access speed.
* The BIOS recognizes the memory configuration and enables memory interleave automatically. Memory areas that do not allow interleaving are left non-interleaved.

See the next page for DIMM configuration examples that enable memory interleave.

<Interleaving in a single-processor configuration>
Example: with a 4GB DIMM and a 2GB DIMM in different channels, the first 2GB of the 4GB DIMM and the 2GB DIMM in the other channel are 2-way interleaved; the second 2GB of the 4GB DIMM is not interleaved.

<Interleaving in a dual-processor configuration>
Example: with three 2GB DIMMs on each CPU, memory operates with 6-way interleave (3-way interleave per processor when NUMA (Non-Uniform Memory Access) is activated in the BIOS setup menu). At least one memory module must be connected to each CPU.
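The single-processor behavior described above can be approximated in a few lines. This is a hypothetical, simplified model (it groups whole DIMMs of equal size and ignores the partial-capacity interleaving of mixed-size DIMMs shown in the 4GB+2GB example):

```python
from collections import Counter

# Simplified 1-CPU model: equal-size DIMMs spread across the three channels
# interleave 3-way, a matched pair interleaves 2-way, and any remainder runs
# without interleave. Not NEC firmware logic.
def interleave_report(dimm_sizes_gb):
    report = []
    for size, count in sorted(Counter(dimm_sizes_gb).items()):
        threes, rest = divmod(count, 3)
        twos, single = divmod(rest, 2)
        if threes:
            report.append(f"{size}GB x{threes * 3}: 3-way")
        if twos:
            report.append(f"{size}GB x{twos * 2}: 2-way")
        if single:
            report.append(f"{size}GB x1: not interleaved")
    return report
```

For instance, three 2GB DIMMs report 3-way interleave, while adding a lone 4GB DIMM leaves that module non-interleaved under this simplified model.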

Page 159

DIMM configuration examples for memory interleave (1-processor configuration)
* Memory is 2- or 3-way interleaved in a single-processor configuration. The interleave mode varies by memory configuration.
* To increase memory access speed, use a configuration in which memory interleave is available.

CPU Blade (Express5800/B120b-h)

| Size | 2-way | 2-way + 3-way | 3-way |
|---|---|---|---|
| 4GB | 2GB DIMM x2 | - | - |
| 6GB | - | - | 2GB DIMM x3 |
| 8GB | 2GB DIMM x4, 4GB DIMM x2 | - | - |
| 10GB | - | 2GB DIMM x5 | - |
| 12GB | - | - | 2GB DIMM x6, 4GB DIMM x3 |
| 16GB | 8GB DIMM x2 | 2GB DIMM x4 + 4GB DIMM x2 | - |
| 18GB | - | - | 2GB DIMM x3 + 4GB DIMM x3 |
| 20GB | - | 4GB DIMM x5 | - |
| 24GB | - | - | 4GB DIMM x6, 8GB DIMM x3 |
| 30GB | - | - | 2GB DIMM x3 + 8GB DIMM x3 |
| 32GB | 16GB DIMM x2 | 4GB DIMM x4 + 8GB DIMM x2 | - |
| 36GB | - | - | 4GB DIMM x3 + 8GB DIMM x3 |
| 40GB | - | 8GB DIMM x5 | - |
| 48GB | - | - | 8GB DIMM x6, 16GB DIMM x3 |
| 54GB | - | - | 2GB DIMM x3 + 16GB DIMM x3 |
| 60GB | - | - | 4GB DIMM x3 + 16GB DIMM x3 |
| 64GB | - | 8GB DIMM x4 + 16GB DIMM x2, 8GB DIMM x2 + 16GB DIMM x3 | - |
| 72GB | - | - | 8GB DIMM x3 + 16GB DIMM x3 |
| 80GB | - | 8GB DIMM x2 + 16GB DIMM x4 | - |
| 96GB | - | - | 16GB DIMM x6 |

Page 160

DIMM configuration examples for memory interleave (2-processor configuration)
* Memory is 2-, 4-, or 6-way interleaved in a dual-processor configuration. The interleave mode varies by memory configuration.
* The examples below assume NUMA (Non-Uniform Memory Access) is disabled (the default). When NUMA is used, memory interleave is enabled per processor; see the 1-processor DIMM configuration examples on the previous page.
* To increase memory access speed, use a 4-way, 2+6-way, 4+6-way, or 6-way interleave configuration.

CPU Blade (Express5800/B120b-h)

Example configurations per total size (interleave mode in parentheses where recoverable):
* 8GB: 4GB x2 (2-way); 2GB x4 (4-way)
* 12GB: 2GB x6 (6-way)
* 16GB: 8GB x2 (2-way); 2GB x8 or 4GB x4 (4-way); 2GB x4 + 4GB x2
* 20GB: 2GB x6 + 4GB x2 (2+6-way); 2GB x10 (4+6-way); 4GB x4 + 2GB x2
* 24GB: 2GB x12 or 4GB x6 (6-way)
* 28GB: 2GB x6 + 4GB x4 (4+6-way)
* 32GB: 16GB x2 (2-way); 4GB x8 (4-way); 2GB x4 + 4GB x6 (4+6-way); 4GB x4 + 8GB x2
* 36GB: 2GB x6 + 4GB x6 (6-way)
* 40GB: 4GB x6 + 8GB x2 (2+6-way); 4GB x10 (4+6-way); 4GB x2 + 8GB x4
* 48GB: 4GB x12 or 8GB x6 (6-way)
* 52GB: 2GB x2 + 8GB x6 (2+6-way)
* 56GB: 4GB x2 + 8GB x6 (2+6-way); 2GB x4 + 8GB x6 or 4GB x6 + 8GB x4 (4+6-way)
* 60GB: 2GB x6 + 8GB x6 (6-way)
* 64GB: 16GB x4 or 8GB x8 (4-way); 4GB x4 + 8GB x6 (4+6-way); 8GB x4 + 16GB x2
* 72GB: 4GB x6 + 8GB x6 (6-way)
* 80GB: 8GB x6 + 16GB x2 (2+6-way); 8GB x10 (4+6-way); 8GB x2 + 16GB x4
* 96GB: 8GB x12 or 16GB x6 (6-way)
* 100GB: 2GB x2 + 16GB x6 (2+6-way)
* 108GB: 2GB x6 + 16GB x6 (6-way)
* 120GB: 4GB x6 + 16GB x6 (6-way)
* 128GB: 16GB x8 (4-way)
* 144GB: 8GB x6 + 16GB x6 (6-way)
* 160GB: 16GB x10 (4+6-way)
* 192GB: 16GB x12 (6-way)

Page 161

Memory clock (Registered memory)

The memory clock differs depending on the CPU (Xeon X5680/X5650/E5645/L5640/L5630) and on the installed memory modules:
* With 16GB DIMMs installed: 800MHz when 4 or more DIMMs are installed per CPU; otherwise 1066MHz.
* Without 16GB DIMMs: 800MHz when 7 or more DIMMs are installed per CPU; otherwise 1333MHz.
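One possible reading of the memory-clock flow chart as code. This is a sketch only: the thresholds are taken from the chart above, not from NEC firmware documentation, so treat the exact branch conditions as assumptions.

```python
# Sketch of the memory-clock decision described above (Registered DIMMs,
# Xeon X5680/X5650/E5645/L5640/L5630). Thresholds are assumptions read off
# the flow chart, not NEC-published firmware logic.
def memory_clock_mhz(dimms_per_cpu, has_16gb_dimm):
    if has_16gb_dimm:
        # 16GB modules run at 1066MHz at most, dropping to 800MHz when
        # 4 or more DIMMs are installed per CPU.
        return 800 if dimms_per_cpu >= 4 else 1066
    # 2/4/8GB modules: 1333MHz unless 7+ DIMMs per CPU force 800MHz.
    return 800 if dimms_per_cpu >= 7 else 1333
```

For example, a fully populated CPU with 16GB DIMMs lands at 800MHz, matching the 800MHz examples shown for the mixed 16GB configurations.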

Memory RAS Feature

To use the memory RAS features of memory mirroring and lockstep (x8 SDDC), RPQ is required. Contact ITGL for details.

[Figure: example 2-CPU DIMM configurations mixing 16GB/8GB/4GB modules on CPU 1 and CPU 2]

Page 162

Express5800/B120b-d

CPU Blade (Express5800/B120b-d)

ICON: The icons on the configuration chart stand for the supported OSes, as shown in the following table. For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

(Legend: "Supported" / "Certified by Distributor")

* 2003: Windows Server 2003 (with SP1 or later)
* 2003x64: Windows Server 2003, x64 Edition
* 2003R2: Windows Server 2003 R2
* 2003R2x64: Windows Server 2003 R2, x64 Edition
* 2008: Windows Server 2008
* 2008x64: Windows Server 2008 (x64)
* 2008R2: Windows Server 2008 R2 (x64)
* ESXi4.1: VMware ESXi 4.1
* EL4: Red Hat Enterprise Linux ES4/AS4
* EL4x64: Red Hat Enterprise Linux ES4(EM64T)/AS4(EM64T)
* EL5: Red Hat Enterprise Linux 5/AP5
* EL5x64: Red Hat Enterprise Linux 5(EM64T)/AP5(EM64T)
* EL6: Red Hat Enterprise Linux 6
* EL6x64: Red Hat Enterprise Linux 6 (x86_64)

Page 163

CPU Blade (Express5800/B120b-d)

Express5800/B120b-d (base machine)
HDD-less Blade Server delivering up to 192GB memory

Features
- Xeon X5670 / L5640 processors
- Memory: max. 192GB (DDR3)
- 1000BASE-X ports x2 as standard

Specifications (Express5800/B120b-d):
* Model name / N-code: N8400-121F (Intel Xeon low-voltage L5640) / N8400-117F (Intel Xeon X5670)
* Clock frequency: 2.26GHz (L5640) / 2.93GHz (X5670)
* Intel QuickPath Interconnect: 5.86GT/s (L5640) / 6.4GT/s (X5670)
* L3 cache: 12MB
* CPU sockets, maximum (standard) / cores per CPU: 2 (1) / 6 cores
* Intel 64, Intel Virtualization Technology, Enhanced Intel SpeedStep Technology, Intel Hyper-Threading Technology, Intel Turbo Boost Technology: supported
* Chipset: Intel 5500
* Memory: DDR3-1066/1333 Registered DIMM x12; x4 SDDC with ECC, lockstep mode (x8 SDDC), memory mirroring *1; speed: Registered DIMM 1066MHz/1333MHz; standard: none (mandatory option); maximum: 192GB (16GB x12)
* Storage device: diskless (no internal disk drives available); 2.5" disk bays [open]: 0 [0]
* Mezzanine slots [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported)
* Display function: graphic accelerator integrated into the chipset (standard); video RAM: 32MB; resolution (max.): 640x480, 800x600, 1,024x768, 1,280x1,024 (each up to 16,770,000 colors)
* Interface: 1000BASE-X (connected to the mid-plane) x2; SUV (Serial x1, VGA x1, USB x2) connector x1 (when the SUV cable is connected)
* Weight (maximum): 5kg
* Size (W x D x H mm): 51.6 x 515.4 x 180.7
* Supported enclosures: SIGMABLADE-M, SIGMABLADE-H v2
* Power per CPU Blade: 327W (DC) for the L5640 model / 414W (DC) for the X5670 model; with SIGMABLADE-M: 3,809W / 3,887VA (AC) or 4,600W / 4,694VA (AC); with SIGMABLADE-H v2: 7,546W / 7,700VA (AC) or 9,146W / 9,333VA (AC)
* Temperature / humidity conditions: during operation 10 to 35C / 20 to 80% (non-condensing); when stored -10 to 55C / 20 to 80% (non-condensing)
* Safety standards: FCC, ICES-3/CSA C108.8, C-Tick, Taiwan BSMI, UL, CE Mark, CB Report, CCC
* Accessories: NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
* OS — Windows: Microsoft Windows Server 2003 R2 Standard Edition *2 / Enterprise Edition *2, Standard x64 Edition *2 / Enterprise x64 Edition *2; Microsoft Windows Server 2008 Standard / Enterprise, Standard (x64) / Enterprise (x64); Microsoft Windows Server 2008 R2 Standard / Enterprise
* OS — Linux *3: Red Hat Enterprise Linux ES4 / ES4(EM64T), AS4 / AS4(EM64T); Red Hat Enterprise Linux 5 / 5(EM64T); Red Hat Enterprise Linux Advanced Platform 5 / 5(EM64T)
* OS — VMware: VMware ESXi 4.1

* A set of console devices is required for each system.
* For configuration details, see the System Configuration Guide provided for the Blade Enclosure.
*1: RPQ is required to enable the RAS functions for memory. In addition, all memory modules should be 2GB/4GB/8GB/16GB to use the x4 SDDC function.
*2: Service Pack 2 (SP2) or later is required.
*3: For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or your regional sales support contact in the NEC/J ITPF Global Business Development Division.

Page 164

CPU Blade (Express5800/B120b-d)

Express5800/B120b-d Quick Sheet

CPU socket, memory slots
* CPU: Xeon X5670 / L5640. One CPU is mandatory; the second is optional.
  - CPU Kit (Xeon X5670 (2.93GHz)) [N8401-052F] — can be installed on N8400-117F
  - CPU Kit (Xeon L5640 (2.26GHz)) [N8401-053F] — can be installed on N8400-121F
* Memory (DDR3): 12 slots. When 2 CPUs are installed, it is recommended to install the same number of DIMMs for each CPU. A 2-CPU configuration is required to install 7 or more memory modules.
  - The following products are DDR3-1,333MHz Registered type:
    * Additional 2GB memory module set [N8402-080F] (2GB x1)
    * Additional 4GB memory module set [N8402-081F] (4GB x1)
    * Additional 8GB memory module set [N8402-066F] (8GB x1)
  - The following product is DDR3-1,066MHz Registered type:
    * Additional 16GB memory module set [N8402-057F] (16GB x1)

Expansion slots
* CPU Blade slot: SIGMABLADE-H v2 / SIGMABLADE-M
* Mezzanine cards for mezzanine slots 1 and 2:
  - 1000BASE-T(2ch) adapter [N8403-017]
  - 1000BASE-T(4ch) adapter [N8403-020] *1
  - 1000BASE-T(2ch) adapter (iSCSI support) [N8403-021]
  - 1000BASE-T(4ch) adapter (iSCSI support) [N8403-022] *1
  - 10GbE(2ch) adapter (iSCSI support) [N8403-024] *1
  - 10GBASE-KR adapter [N8403-035]
  - Fibre Channel controller (4Gbps/2ch) [N8403-018]
  - Fibre Channel controller (8Gbps/2ch) [N8403-034]
  *1: Can only be installed in the Type-2 mezzanine slot.

Internal port
* Internal USB option: VMware ESXi 4.1 base kit [N8403-040F]
  - VMware ESXi 4.1 Standalone (2CPU) is pre-installed in the USB flash memory
  - For BTO installation only
  - The USB flash memory is for VMware ESXi only

Note: to install new options, please update the BIOS, firmware, drivers, and EM firmware.

Page 165

CPU Blade (Express5800/B120b-d)

Express5800/B120b-d basic configuration

Blade Enclosures (an Intelligent Switch or Pass-Through Card is required):
* N8405-016BF SIGMABLADE-M — dimensions: width 484.8mm, depth 829mm, height 264.2mm (6U); protruding objects included
* N8405-040AF SIGMABLADE-H v2 — dimensions: width 483mm, depth 823mm, height 442mm (10U); protruding objects included
* To keep cooling efficiency, any vacant slots in the Blade Enclosure must be covered with slot covers.

Base machines:
* N8400-117F Express5800/B120b-d (6C/X5670): Xeon processor X5670 (2.93GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 6.4GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB, ECC, max. 192GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
* N8400-121F Express5800/B120b-d (6C/L5640): Xeon low-voltage processor L5640 (2.26GHz) x1, 12MB 3rd-level cache, QuickPath Interconnect 5.86GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB, ECC, max. 192GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)
[Supported OS: 2003R2, 2003R2 x64, 2008, 2008 x64, 2008R2, EL4, EL4 x64, EL5, EL5 x64, ESXi4]

* Remote wake-up function: allows the CPU Blades to be powered on from a remote console via LAN.
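The remote wake-up function uses the standard Wake On LAN "magic packet" format (six 0xFF bytes followed by the target MAC address repeated 16 times), so a generic sender works. The helper and MAC address below are illustrative, not NEC tooling.

```python
import socket

def magic_packet(mac):
    """Builds a standard Wake On LAN magic packet for the given MAC address."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # 6 x 0xFF synchronization bytes, then the MAC repeated 16 times (102 bytes).
    return b"\xff" * 6 + raw * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Broadcasts the magic packet on the LAN segment (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The sender must sit on a network segment from which the broadcast can reach the blade's standard LAN port (optional LAN boards do not support remote wake-up).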

Page 166

CPU Blade (Express5800/B120b-d)

Express5800/B120b-d( base machine)

Front View (indicators and controls)
* Power LED / Power switch
* STATUS LED / Dump switch
* Reset switch
* Ether1 Link/Access LED
* Ether2 Link/Access LED
* ID switch / ID LED
* SUV connector (USB, VGA)

External View

Page 167

CPU Blade (Express5800/B120b-d)

CPU Blade Configuration (B120b-d)

CPU

CPU slots: x2. 1 CPU is installed as standard; up to 2 processors can be installed.
* N8400-117F supports the following CPU: CPU Kit (Xeon X5670 (2.93GHz)) [N8401-052F]
* N8400-121F supports the following CPU: CPU Kit (Xeon L5640 (2.26GHz)) [N8401-053F]

Memory (Registered type)

Memory slots: x12. Not provided as standard. Up to 6 memory modules (96GB) can be installed per CPU; an additional CPU is required to install 7 or more modules. Up to 12 modules (192GB) can be installed in a 2-CPU configuration. For more details about memory performance and RAS functions, refer to the end of this document (RPQ is required to enable the RAS functions). For details about the memory size supported by each OS, see the next page of this document.

* 2GB Additional Memory module (2GB x1) [N8402-080F]
* 4GB Additional Memory module (4GB x1) [N8402-081F]
* 8GB Additional Memory module (8GB x1) [N8402-066F]
* 16GB Additional Memory module (16GB x1) [N8402-057F]

Important
* Up to 6 memory modules (96GB) can be installed per CPU. An additional CPU is required to install 7 or more memory modules.

OS | Maximum memory supported by the OS | Biggest available memory capacity (including OS/applications)
Microsoft Windows Server 2003 R2, Standard Edition / Windows Server 2008 Standard | 4GB | HW-DEP disabled: about 3.3GB; HW-DEP enabled: 4GB (Execute Disable Bit (XD bit) is enabled by default)
Microsoft Windows Server 2003 R2, Enterprise Edition / Windows Server 2008 Enterprise | 64GB | 64GB
Microsoft Windows Server 2003 R2, Standard x64 / Windows Server 2008 Standard (x64) / Windows Server 2008 R2 Standard | 32GB | 32GB
Microsoft Windows Server 2003 R2, Enterprise x64 | 1TB | 192GB
Microsoft Windows Server 2008 Enterprise (x64) / Windows Server 2008 R2 Enterprise | 2TB | 192GB
Red Hat Enterprise Linux 5 / Advanced Platform | 16GB | 16GB
Red Hat Enterprise Linux 5 (EM64T) / Advanced Platform (EM64T) | 256GB | 192GB
VMware ESXi | 256GB or 1TB (maximum memory of a virtual machine: 255GB) | 192GB
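The per-OS limits above reduce to a simple lookup. The following is a minimal sketch: the dictionary values are transcribed from the table, but the names and structure are illustrative, not part of any NEC tool.

```python
# Per-OS addressable-memory limits from the table above, in GB.
# Illustrative sketch only; names are not part of any NEC software.
OS_MEMORY_LIMIT_GB = {
    "Windows Server 2008 Standard": 4,
    "Windows Server 2008 Enterprise": 64,
    "Windows Server 2008 R2 Standard": 32,
    "Windows Server 2008 R2 Enterprise": 2048,  # 2TB
    "RHEL 5": 16,
    "RHEL 5 (EM64T)": 256,
}

B120B_D_MAX_GB = 192  # blade maximum: 12 DIMM slots x 16GB

def usable_memory_gb(os_name: str, installed_gb: int) -> int:
    """Memory the OS can actually address on this blade."""
    return min(installed_gb, OS_MEMORY_LIMIT_GB[os_name], B120B_D_MAX_GB)
```

For example, a fully populated 192GB blade running Windows Server 2008 R2 Standard can still only use 32GB.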

Page 168

CPU Blade (Express5800/B120b-d)

CPU Blade Configuration: B120b-d

USB option

Internal USB: VMware ESXi 4.1 pre-installed USB flash memory (for BTO only)
* VMware ESXi 4.1 base kit [N8403-040F]

Notes for ordering VMware ESXi 4.1 base kit

* This product may not be available in some regions. Please contact an NEC sales representative to confirm whether it is available in your country.

* VMware ESXi 4.1 Single Server is preinstalled in USB flash memory (the USB flash memory is connected to the internal USB interface).

* This USB flash memory cannot be used for purposes other than VMware ESXi 4.1.
* Installing VMware ESXi 4.1 on an HDD or SSD is not supported.
* NEC ESMPRO Agent cannot be installed on VMware ESXi 4.1. As a substitute, EXPRESSSCOPE Engine has a function to send alerts to NEC ESMPRO Manager directly, not via NEC ESMPRO Agent.
* The VMware ESXi 4.1 recovery CD is included.
* The NUMA (Non-Uniform Memory Access) function in the BIOS setup menu must be enabled.

ESXi4.1

Notes for ordering the VMware ESXi 4.1 base kit
* The VMware ESXi 4.1 base kit (N8403-040F) is a BTO option.
* This USB flash memory cannot be used for purposes other than VMware ESXi 4.1.
* In a 2-CPU configuration, at least one memory module must be installed for each CPU.
* External storage (iStorage) is required to install a guest OS.

To connect iStorage, a Fibre Channel controller (N8403-018/034) or a 1000BASE-T/10GBASE-KR LAN card (N8403-017/020/021/022/035) is necessary. Please contact an NEC sales representative to confirm the available storage devices.

* The server switching function (N+1 recovery) of SigmaSystemCenter is not supported.

Page 169

CPU Blade (Express5800/B120b-d)

Standard LAN port / Standard LAN Interface

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

Mezzanine slot 1

• By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available.
• The additional LAN interfaces support AFT and ALB, but you cannot configure an AFT/ALB team across multiple LAN interfaces.

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

CPU Blade Configuration

Standard LAN interface
• The standard LAN interface supports AFT and ALB. You cannot team the standard LAN interface with an optional LAN board to configure AFT or ALB.
• The standard LAN interface supports the Remote Wake-Up function. Optional LAN boards do not support this function.

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64, ESXi 4


*1: The vIO control function is not supported.

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035] — Mezzanine slot 1

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through card.

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL5, EL5 x64

Notice
* The B120b-d does not have internal storage. A Storage and I/O Blade AD106a or external storage is required.
* To use external Fibre Channel storage, a Fibre Channel controller must be installed in mezzanine slot 1 or 2.
* If you use iSCSI boot, a 1000BASE-T (2ch/4ch) adapter (iSCSI support) is required.

(3) Adding a Fibre channel I/F (4Gbps)

Mezzanine slot 1 — To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot 1 — To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

ESXi4

ESXi4

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL5, EL5 x64, ESXi 4

Page 170

CPU Blade (Express5800/B120b-d)

Mezzanine Slot 2 (Type I or Type II)

(1) Adding a 2ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available. The additional LAN interfaces support AFT and ALB, but you cannot configure an AFT/ALB team across multiple LAN interfaces.

CPU Blade Configuration

(2) Adding a 4ch LAN I/F (1Gbps Ether)

1GbE LAN interface card
* 1000BASE-T(4ch) adapter [N8403-020] *1 *2
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6. You cannot install the 1Gb Pass-Through Card in slots 5 and 6. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or 1Gb Pass-Through Card in switch module slots 5, 6, 7 and 8.


Supported OS (1): 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64, ESXi 4
Supported OS (2): 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64, ESXi 4

(4) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035] — Mezzanine slot 2

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through card.

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL5, EL5 x64

(5) Adding a Fibre channel IF (4Gbps)

To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2 — 4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64

(6) Adding a Fibre channel IF (8Gbps)

To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2 — 8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

ESXi4

ESXi4

Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL5, EL5 x64, ESXi 4

(3) Adding a 10G LAN I/F

10GbE LAN interface card
* 10GbE(2ch) adapter [N8403-024]

Only connection to the 10GbE Intelligent L3 Switch is supported. — Mezzanine slot 2

* When installed in SIGMABLADE-H v2, two 10GbE Intelligent L3 Switches are required to use 2 ports.

Supported OS: 2003 R2, 2003 R2 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Page 171

CPU Blade (Express5800/B120b-d)

iSCSI Boot

(1) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] — Mezzanine slot 1

By installing an additional LAN interface supporting iSCSI boot, iSCSI boot becomes available.

CPU Blade Configuration

(2) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] — Mezzanine slot 2

(3) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] — Mezzanine slot 2

To connect switch modules to the LAN, see "Switch Module Slot" of SIGMABLADE-M (applies to configurations (1) to (3)).

Supported OS (configurations (1) to (3)): 2008, 2008 x64, 2008 R2

Notice for iSCSI boot
* When you use iSCSI boot, a dedicated network for the storage device is necessary.
* Only Windows Server 2008/2008 R2 supports iSCSI boot.
* The LAN driver and OS configurations must be changed.
* For the latest information on supported SAN boot, contact your sales representative.

Page 172

CPU Blade (Express5800/B120b-d)
CPU Blade Configuration (option card and switch module)

The following combinations of option cards and switch modules are supported.
* For more details about the switch modules, please refer to the Blade Enclosure section of this document.

SIGMABLADE-H v2 — LAN switch modules

Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-029) | 10Gb Pass-Through Card (N8406-036) | 10Gb Intelligent L3 Switch (N8406-026)
1000BASE-T (2ch) N8403-017 | OK | OK | OK | OK | OK*1 | -
1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK*1 | -
1000BASE-T (2ch) (for iSCSI) N8403-021 | OK | OK | OK | OK | OK*1 | -
1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK*1 | -
Standard LAN (B120a/B120a-d/B120b/B120b-d) | OK | OK | OK | OK | OK*1 | -
10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | -
10GbE Adapter (2ch) N8403-024 | - | - | - | - | - | OK

*1: 10G SFP+ (N8406-037) is not supported.
-: not supported

SIGMABLADE-H v2 — FC switch modules

Option card | 4G FC Switch (12 port) (N8406-019) | 4G FC Switch (24 port) (N8406-020) | 8G FC Switch (12 port) (N8406-040) | 8G FC Switch (24 port) (N8406-042) | FC Pass-Through Card (N8406-030)
4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK | OK | OK
8G Fibre Channel controller (2ch) N8403-034 | - | - | OK | OK | -

SIGMABLADE-M — LAN switch modules

Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-011) | 10Gb Pass-Through Card (N8406-035) | 10Gb Intelligent L3 Switch (N8406-026)
1000BASE-T (2ch) N8403-017 | OK*2 | OK*2 | OK | OK | OK*1 | -
1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK*1 | -
1000BASE-T (2ch) (for iSCSI) N8403-021 | OK*2 | OK*2 | OK | OK | OK*1 | -
1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK*1 | -
Standard LAN (B120a/B120a-d/B120b/B120b-d) | OK | OK | OK | OK | OK*1 | -
10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | -
10GbE Adapter (2ch) N8403-024 | - | - | - | - | - | OK

*1: 10G SFP+ (N8406-037) is not supported.
*2: 1Gb Interlink Expansion Card (N8406-013) is supported.
-: not supported

SIGMABLADE-M — FC switch modules

Option card | 4G FC Switch (12 port) (N8406-019) | 8G FC Switch (12 port) (N8406-040) | FC Pass-Through Card (N8406-021)
4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK
8G Fibre Channel controller (2ch) N8403-034 | - | OK | -

Page 173

CPU Blade (Express5800/B120b-d)

CPU Blade Configuration

On-board RAS chip

Server Management (EXPRESSSCOPE Engine 2)

* EXPRESSSCOPE Engine 2 (provided as standard)

* Has a LAN port dedicated to remote management (no expansion slot required)

The B120b-d comes standard with the "EXPRESSSCOPE Engine 2" remote management chip. All of the management functions below are enabled by default.

[Supplement] Server management functions in the B120b-d (EXPRESSSCOPE Engine 2)

Server monitoring: temperature / HDD / fan / electric power / voltage monitoring, including degeneracy monitoring (CPU / memory)

Collecting hardware event log


* When the H/W remote KVM console function is used, the number of colors is reduced to 65,536 at a resolution of 1280x1024. For more details, see "Server Management" in the Technical Guide.

* All of the above management functions are enabled regardless of OS status.
* Some functions depend on the configuration. For more details, see "Server Management" in the Technical Guide.

Stall monitoring / automatic reboot: monitors booting, BIOS/POST stall, OS stall, and shutdown

Alerting: HW error, boot error, and OS panic (via SNMP, e-mail)

HW error, boot error, and OS panic (via COM port (modem))

Remote console (via COM port/LAN)

POST/BIOS setup, DOS utility

Panic screen, Boot screen

CUI screen (OS console)

GUI screen (OS console)

Remote control (via COM port/LAN)

Remote reset/power on-off/dump

OS shutdown

Remote media (CD/DVD, FD, Flash) (via LAN)

CLP (Command Line Protocol, DMTF compliant)

Remote control via Web browser (no dedicated application required)

Remote batch

Scheduling (UPS not required)

Maintenance Remote boot (PXE boot), maintenance utility

Others Automatic IP address setting via DNS/DHCP

Remote wakeup Wake On LAN, Wake On Ring

Group management Monitoring/controlling by the group

Industry standard IPMI 2.0
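Because the EXPRESSSCOPE Engine 2 implements industry-standard IPMI 2.0 over its management LAN port, generic IPMI clients can talk to it. The following is a hedged sketch using the standard ipmitool CLI via Python; the host address and credentials are hypothetical, and the exact ipmitool options supported depend on your installed version.

```python
import subprocess

def ipmi_cmd(host, user, password, *args):
    """Build an ipmitool command line for an IPMI-2.0-over-LAN session
    (the 'lanplus' interface targets the management processor's LAN port).
    Host and credentials below are placeholders, not documented defaults."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *args]

def power_status(host, user, password):
    """Run 'power status' against the blade's management processor."""
    result = subprocess.run(ipmi_cmd(host, user, password, "power", "status"),
                            capture_output=True, text=True)
    return result.stdout.strip()
```

For example, `power_status("10.0.0.1", "admin", "password")` would query the remote power state, matching the "remote reset/power on-off" row in the table above.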

Page 174

Memory Installation Guide

Memory Installation Guide (Registered memory)

[Figure: memory slot installation order]
1-CPU configuration: up to 6 memory modules, installed in the numbered slot order for CPU 1.
2-CPU configuration: up to 12 memory modules; the memory installation sequence differs from that of the 1-CPU configuration.

This server adopts the new QPI architecture (serial transfer). The memory installation rules for this architecture differ from those of the previous FSB architecture (parallel transfer).

• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core/multi-task applications.
• A 1-CPU configuration supports 2- and 3-way interleave; a 2-CPU configuration supports 2-, 4-, and 6-way interleave.

* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.

The memory installation sequence is fixed: install DIMMs in the slots in the numbered order shown in the illustrations, from the largest capacity to the smallest.
* When using VMware ESX/ESXi, at least one memory module must be installed for each CPU.
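The installation rule above — numbered slots filled in order, largest DIMMs first — can be sketched as follows. The slot numbering here is illustrative; follow the actual per-CPU sequence printed in the figure.

```python
def installation_plan(dimm_sizes_gb, slot_sequence):
    """Pair each DIMM (largest capacity first) with the next slot in the
    numbered installation sequence. Slot numbers are illustrative only."""
    ordered = sorted(dimm_sizes_gb, reverse=True)
    if len(ordered) > len(slot_sequence):
        raise ValueError("more DIMMs than available slots")
    return list(zip(slot_sequence, ordered))

# 1-CPU configuration (up to 6 DIMMs), slots filled in order 1..6:
plan = installation_plan([2, 4, 2], [1, 2, 3, 4, 5, 6])
# slot 1 gets the 4GB DIMM, slots 2 and 3 the two 2GB DIMMs
```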

CPU Blade (Express5800/B120b-d)


Memory interleave

• This system uses Independent Channel mode; memory bandwidth increases when multiple DIMMs are installed in different channels. Memory interleave is used to increase memory access speed.

• The BIOS recognizes the memory configuration and enables memory interleave automatically. Memory areas that do not allow interleave are left non-interleaved.
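To illustrate what N-way interleave means, consecutive interleave units rotate across channels, so sequential accesses hit all channels in turn. This is a generic sketch: the 64-byte granularity is a typical cache-line size, not a documented B120b-d hardware value.

```python
CACHE_LINE = 64  # illustrative interleave granularity in bytes (assumption)

def channel_for_address(addr: int, ways: int) -> int:
    """Channel that serves a physical address under N-way interleave:
    successive cache lines rotate round-robin across the channels."""
    return (addr // CACHE_LINE) % ways

# Six consecutive cache lines under 3-way interleave:
channels = [channel_for_address(i * CACHE_LINE, 3) for i in range(6)]
# -> [0, 1, 2, 0, 1, 2]
```

Sequential access therefore spreads evenly over all three channels, which is why interleave-enabled configurations have higher memory bandwidth.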

<Interleaving in single-processor configuration>
Example: a 2GB DIMM in each of two channels of CPU 1 is 2-way interleaved. With a 4GB DIMM and a 2GB DIMM, the first 2GB of the 4GB DIMM and the 2GB DIMM in the other channel are 2-way interleaved; the second 2GB of the 4GB DIMM is not interleaved.

<Interleaving in dual-processor configuration>
Example: CPU 1 and CPU 2 with three 2GB DIMMs each operate 6-way interleaved (3-way interleaved per processor when NUMA (Non-Uniform Memory Access) is activated in the BIOS setup menu).

See the next page for DIMM configuration examples that enable memory interleave.

Page 175

DIMM configuration examples for memory interleave (1-processor configuration)
• Memory is 2- or 3-way interleaved in a single-processor configuration. The interleave modes vary by memory configuration.
• To increase memory access speed, use a configuration that enables memory interleave.

CPU Blade (Express5800/B120b-d)

Size | 2-way | 2-way + 3-way | 3-way
4GB | 2GB DIMM x 2 | - | -
6GB | - | - | 2GB DIMM x 3
8GB | 4GB DIMM x 2 | 2GB DIMM x 4 | -
10GB | - | 2GB DIMM x 5 | -
12GB | - | - | 2GB DIMM x 6 / 4GB DIMM x 3
16GB | 8GB DIMM x 2 | 2GB DIMM x 4 + 4GB DIMM x 2 | -
18GB | - | - | 2GB DIMM x 3 + 4GB DIMM x 3
20GB | - | 4GB DIMM x 5 | -
24GB | - | - | 4GB DIMM x 6 / 8GB DIMM x 3
30GB | - | - | 2GB DIMM x 3 + 8GB DIMM x 3
32GB | 16GB DIMM x 2 | 4GB DIMM x 4 + 8GB DIMM x 2 | -
36GB | - | - | 4GB DIMM x 3 + 8GB DIMM x 3
40GB | - | 8GB DIMM x 5 | -
48GB | - | - | 8GB DIMM x 6 / 16GB DIMM x 3
54GB | - | - | 2GB DIMM x 3 + 16GB DIMM x 3
60GB | - | - | 4GB DIMM x 3 + 16GB DIMM x 3
64GB | - | 8GB DIMM x 4 + 16GB DIMM x 2 / 8GB DIMM x 2 + 16GB DIMM x 3 | -
72GB | - | - | 8GB DIMM x 3 + 16GB DIMM x 3
80GB | - | 8GB DIMM x 2 + 16GB DIMM x 4 | -
96GB | - | - | 16GB DIMM x 6
(-: no example configuration for this mode)
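Each row of the table can be sanity-checked by summing the DIMM sets. A minimal sketch follows; the divisibility test is a simplification for illustration, not the actual BIOS interleave logic.

```python
def total_gb(dimms):
    """Total capacity for a configuration given as {DIMM size in GB: count}."""
    return sum(size * count for size, count in dimms.items())

def full_interleave_possible(dimms, ways):
    """Simplified check (assumption): full N-way interleave needs every DIMM
    size to be installed in multiples of `ways`, one per interleaved channel."""
    return all(count % ways == 0 for count in dimms.values())

# 54GB row of the table: 2GB DIMM x 3 + 16GB DIMM x 3, 3-way interleaved
config = {2: 3, 16: 3}
```

Here `total_gb(config)` confirms the 54GB row adds up, and `full_interleave_possible(config, 3)` holds while the same set cannot fill a 2-way layout evenly.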

Page 176

DIMM configuration examples for memory interleave (2-processor configuration)
• Memory is 2-, 4-, or 6-way interleaved in a dual-processor configuration. The interleave modes vary by memory configuration.
• The table shows DIMM configuration examples for memory interleave when NUMA (Non-Uniform Memory Access) is disabled (the default). When NUMA is used, memory interleave is enabled per processor; see the DIMM configuration examples for the 1-processor configuration on the previous page.
• To increase memory access speed, use 4-way, 2+6-way, 4+6-way, or 6-way interleave.

CPU Blade (Express5800/B120b-d)

[Table: DIMM configuration examples for memory interleave in a 2-processor configuration. For each total capacity from 8GB to 192GB, the table lists example DIMM combinations for the 2-way, 4-way, 2+6-way/4+6-way, and 6-way interleave modes. Representative entries:]

Size | 2-way | 4-way | 6-way
8GB | 4GB DIMM x 2 | 2GB DIMM x 4 | -
12GB | - | - | 2GB DIMM x 6
16GB | 8GB DIMM x 2 | 4GB DIMM x 4 | -
48GB | - | - | 4GB DIMM x 12 / 8GB DIMM x 6
96GB | - | - | 8GB DIMM x 12 / 16GB DIMM x 6
192GB | - | - | 16GB DIMM x 12

Page 177

Memory clock (Registered)

The memory clock differs depending on the CPU and on the installed memory modules.

CPU: Xeon X5670/L5640 — is a 16GB DIMM installed?
- Yes: memory clock 1066MHz
- No: memory clock 1333MHz

CPU Blade (Express5800/B120b-d)

[Example] A configuration that includes a 16GB DIMM (e.g. CPU 1: 16GB + 8GB + 8GB, CPU 2: 8GB + 4GB + 4GB + 2GB) operates at 1066MHz.
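The decision above reduces to a one-line rule. The function below is an illustrative sketch of that stated rule only; other factors (for example the number of DIMMs per channel) can also lower the effective clock.

```python
def memory_clock_mhz(dimm_sizes_gb):
    """Registered-DIMM memory clock for a Xeon X5670/L5640 configuration,
    per the rule above (simplified): any 16GB DIMM caps the clock at
    1066MHz, otherwise the memory runs at 1333MHz."""
    return 1066 if 16 in dimm_sizes_gb else 1333

# memory_clock_mhz([16, 8, 8]) -> 1066
# memory_clock_mhz([8, 4, 2])  -> 1333
```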

Memory RAS Feature

To use the memory RAS features of memory mirroring and lockstep (x8 SDDC), an RPQ is required. Contact ITGL for details.

Page 178

Express5800/B120a-d

CPU Blade (Express5800/B120a-d)

ICON — The icons on the configuration charts stand for the supported OS as shown in the following table. For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative. The chart marks each OS as Supported or Certified by Distributor.

2003: Windows Server 2003 (with SP1 or later)
2003 x64: Windows Server 2003, x64 Edition
2003 R2: Windows Server 2003 R2
2003 R2 x64: Windows Server 2003 R2, x64 Edition
2008: Windows Server 2008
2008 x64: Windows Server 2008 (x64)
2008 R2: Windows Server 2008 R2 (x64)
EL4: Red Hat Enterprise Linux ES4/AS4
EL4 x64: Red Hat Enterprise Linux ES4 (EM64T) / AS4 (EM64T)
EL5: Red Hat Enterprise Linux 5/AP5
EL5 x64: Red Hat Enterprise Linux 5 (EM64T) / AP5 (EM64T)
EL6: Red Hat Enterprise Linux 6
EL6 x64: Red Hat Enterprise Linux 6 (x86_64)
ESXi4.1: VMware ESXi 4.1

Page 179

CPU Blade (Express5800/B120a-d)
Express5800/B120a-d (base machine)

HDD-less blade server delivering up to 192GB of memory

Features
- Xeon X5550 / E5504 / E5502 / L5520 processors
- Memory: max. 192GB (DDR3)
- 1000BASE-X ports x2 as standard

Model | N8400-090F | N8400-089F | N8400-087F | N8400-088F
CPU | Intel Xeon processor (low voltage) L5520 | Intel Xeon processor X5550 | Intel Xeon processor E5504 | Intel Xeon processor E5502
Clock frequency | 2.26GHz | 2.66GHz | 2.0GHz | 1.86GHz
Intel QuickPath Interconnect | 5.86GT/s | 6.4GT/s | 4.8GT/s | 4.8GT/s
L3 cache | 8MB | 8MB | 4MB | 4MB
Maximum (standard) CPUs / cores per CPU | 2 (1) / 4 cores | 2 (1) / 4 cores | 2 (1) / 4 cores | 2 (1) / 2 cores
Intel 64, Intel Virtualization Technology, Enhanced Intel SpeedStep Technology: supported on all models
Intel Hyper-Threading Technology and Intel Turbo Boost Technology: supported on L5520 and X5550 only
Chipset: Intel 5500
Memory: DDR3-1,066 Registered DIMM (x4 SDDC with ECC; lockstep mode (x8 SDDC) and memory mirroring *1) or DDR3-1,333 Unbuffered DIMM (x4 SDDC with ECC; lockstep mode (x8 SDDC) and memory mirroring *1)
Memory transfer rate: Registered DIMM 800MHz/1,066MHz; Unbuffered DIMM 800MHz to 1,333MHz (depending on the CPU)
Memory capacity: none standard (mandatory option); maximum 192GB (16GB x 12)
Internal HDD: diskless (no internal disk drives available); 2.5" disk bays: 0
Mezzanine slots: Type-1 slot x1, Type-2 slot x1 (Type-1 supported)
Display function: integrated into the chipset (standard); video RAM 32MB; resolution 640x480, 800x600, 1,024x768, 1,280x1,024 (up to 16,770,000 colors)
LAN: 1000BASE-X (connected to the mid-plane) x2
Interface: SUV connector x1 (Serial x1, VGA x1, USB x2; available when the SUV cable is connected)
Weight (maximum): 5.9kg
Size (W x D x H): 51.6 x 515.4 x 180.7 mm
Supported enclosures: SIGMABLADE-M, SIGMABLADE-H, SIGMABLADE-H v2
Power per CPU Blade (DC) | 343W | 433W | 319W | 287W
Power with SIGMABLADE-M (AC) | 3,955W / 4,035VA | 4,773W / 4,870VA | 3,736W / 3,813VA | 3,445W / 3,516VA
Power with SIGMABLADE-H v2 (AC) | 7,840W / 8,000VA | 9,495W / 9,689VA | 7,399W / 7,550VA | 6,810W / 6,949VA
Temperature / humidity conditions: during operation 10 to 35C / 20 to 80% (non-condensing); when stored -10 to 55C / 20 to 80% (non-condensing)
Accessories: NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)

OS (Windows): Microsoft Windows Server 2003 R2 Standard Edition / Enterprise Edition; 2003 R2 Standard x64 Edition / Enterprise x64 Edition; 2008 Standard / Enterprise; 2008 Standard (x64) / Enterprise (x64); 2008 R2 Standard / Enterprise
OS (Linux *2): Red Hat Enterprise Linux ES4 / ES4 (EM64T); AS4 / AS4 (EM64T); 5 / 5 (EM64T); Advanced Platform 5 / Advanced Platform 5 (EM64T)
OS (VMware): VMware ESXi 4

* A set of console devices is required for each system.
* For configuration details, see the System Configuration Guide provided for the Blade Enclosure.
*1: RPQ is required to enable the RAS functions for memory. In addition, all memory modules should be 2GB/4GB/8GB/16GB to use the x4 SDDC function.
*2: For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact your regional sales support person.

Page 180

CPU Blade (Express5800/B120a-d)

Express5800/B120a-d Quick Sheet

Express5800/B120a-d — CPU socket, memory slot

Expansion Slot

Mezzanine slot 1


Memory (DDR3)


CPU

Mezzanine slot 2

Xeon E5502 / E5504 / X5550 / L5520


* When 2 CPUs are installed, it is recommended to install the same number of DIMMs for each CPU.
* A 2-CPU configuration is required to install 7 or more memory modules.

Internal USB port

The following products are DDR3-1,066MHz Registered type. Each product includes 1 memory module.
* Additional 2GB memory module set [N8402-080F] (2GB x1)
* Additional 4GB memory module set [N8402-081F] (4GB x1)
* Additional 8GB memory module set [N8402-066F] (8GB x1)
* Additional 16GB memory module set [N8402-057F] (16GB x1)

The following product is DDR3-1,333MHz Unbuffered type. Each product includes 2 memory modules.
* Additional 4GB memory module set [N8402-046] (2GB x2)

* Registered type and Unbuffered type cannot be mixed.

Mezzanine Cards for Mezzanine Slots

* CPU Kit (Xeon E5504(2.0GHz)) [N8401-036]
* Can be installed on N8400-087F

* CPU Kit (Xeon E5502(1.86GHz)) [N8401-037]
* Can be installed on N8400-088F

* CPU Kit (Xeon X5550(2.66GHz)) [N8401-038]
* Can be installed on N8400-089F

* CPU Kit (Xeon L5520(2.26GHz)) [N8401-039]
* Can be installed on N8400-090F

SIGMABLADE-H v2 / -M — CPU Blade slot

* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter [N8403-020] *1
* 1000BASE-T(2ch) adapter (iSCSI support) [N8403-021]
* 1000BASE-T(4ch) adapter (iSCSI support) [N8403-022] *1
* 10GbE(2ch) adapter [N8403-024]
* 10GBASE-KR adapter [N8403-035]
* Fibre Channel controller (4Gbps/2ch) [N8403-018]
* Fibre Channel controller (8Gbps/2ch) [N8403-034]

*1: only installed in the type 2 mezzanine slot

Note
To install new options, please update the BIOS, firmware, drivers, and EM firmware. When the B120a is installed in SIGMABLADE-H, a GbE intelligent switch is required to use the PXE boot function.

Internal USB option
* VMware ESXi 4.1 base kit [N8403-040F]

VMware ESXi 4.1 Standalone (2CPU) is pre-installed in the USB flash memory.
- For BTO installation only
- The USB flash memory is for VMware ESXi only

Page 181

CPU Blade (Express5800/B120a-d)

Express5800/B120a-d basic configuration

* N8405-016BF SIGMABLADE-M
Dimensions: width 484.8mm, depth 829mm, height 264.2mm (6U)
* Protruding objects included

* N8400-087F Express5800/B120a-d (4C/E5504)
Xeon processor E5504 (2.00GHz) x1, 4MB 3rd-level cache, QuickPath interconnect 4.8GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)



* N8405-040AF SIGMABLADE-H v2
Dimensions: width 483mm, depth 823mm, height 442mm (10U)
* Protruding objects included

* N8400-088F Express5800/B120a-d (2C/E5502)
Xeon processor E5502 (1.86GHz) x1, 4MB 3rd-level cache, QuickPath interconnect 4.8GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)



Supported OS (all models): 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64, ESXi 4

* Remote wake-up function: allows the CPU Blades to be powered on from a remote console via LAN.

* To keep cooling efficiency, any vacant slots in the Blade Enclosure must be covered with slot covers.

* Intelligent Switch or Pass-Through Card is required

* N8400-089F Express5800/B120a-d (4C/X5550)
Xeon processor X5550 (2.66GHz) x1, 8MB 3rd-level cache, QuickPath interconnect 6.4GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


* N8400-090F Express5800/B120a-d (4C/L5520)
Xeon processor L5520 (2.26GHz) x1, 8MB 3rd-level cache, QuickPath interconnect 5.86GT/s, memory selectable (required selection from 2GB/4GB/8GB/16GB), mezzanine slot x2, 1000BASE-X port x2 (remote wake-up function supported), NEC EXPRESSBUILDER bundled (including NEC ESMPRO Manager/Agent)


Supported OS: 2003 R2, 2003 R2 x64, 2008, 2008 x64, 2008 R2, EL4, EL4 x64, EL5, EL5 x64, ESXi 4

Page 182

CPU Blade (Express5800/B120a-d)

Express5800/B120a-d (base machine)

Front View

Power LED / Power switch
STATUS LED / Dump switch
Reset switch
Ether1 Link/Access LED
Ether2 Link/Access LED
ID switch
ID LED
SUV connector (USB, VGA)

External View

Page 183

CPU Blade (Express5800/B120a-d)

CPU Blade Configuration: B120a-d

x12

Not provided as standard. Up to 6 memory modules (96GB) can be installed per CPU; an additional CPU is required to install 7 or more memory modules. Up to 12 memory modules (192GB) can be installed in a 2-CPU configuration.
For more details about memory performance and RAS functions, please refer to the end of this document (RPQ is required to enable the RAS functions). For more details about the memory size supported by each OS, please refer to the next page of this document.

CPU

CPU slot

x2. 1 CPU is installed as standard; up to 2 processors can be installed.

N8400-087F supports the following CPU:
* CPU Kit (Xeon E5504(2.0GHz)) [N8401-036]

N8400-088F supports the following CPU:
* CPU Kit (Xeon E5502(1.86GHz)) [N8401-037]

N8400-089F supports the following CPU:
* CPU Kit (Xeon X5550(2.66GHz)) [N8401-038]

N8400-090F supports the following CPU:
* CPU Kit (Xeon L5520(2.26GHz)) [N8401-039]

Memory (Registered type) / Memory Slot

(RPQ is required to enable RAS functions.) For more details about the memory size supported by each OS, please refer to the next page of this document.

* 2GB Additional Memory module(2GB x 1) [N8402-080F]* 4GB Additional Memory module(4GB x 1) [N8402-081F]* 8GB Additional Memory module(8GB x 1) [N8402-066F]* 16GB Additional Memory module(16GB x 1) [N8402-057F]

Important* Registered type and Unbuffered type cannot be mixed.* Up to 6 memory modules (96GB) can be installed per CPU.Additional CPU is required to install 7 or more memory modules.

Page 184

CPU Blade (Express5800/B120a-d)

CPU Blade Configuration (B120a-d)

Memory (Unbuffered type)
Memory slots: x8. Not provided as standard. Up to 3 sets (12GB) can be installed per CPU; an additional CPU is required to install 4 or more sets. Up to 6 sets (24GB) can be installed in a 2-CPU configuration. For more details about memory performance and RAS functions, please refer to the end of this document (RPQ is required to enable RAS functions). For more details about the memory size supported by each OS, please refer to the table below.
* 4GB Additional Memory module (2GB x2) [N8402-046]

Important
* Registered type and Unbuffered type cannot be mixed.
* Up to 3 sets (12GB) can be installed per CPU. An additional CPU is required to install 4 or more sets.

Maximum memory capacity by OS

| OS | Maximum OS-supported memory | Biggest available memory capacity (including for OS applications) |
| Windows Server 2003 R2 Standard / Windows Server 2008 Standard | 4GB | HW-DEP disabled: about 3.3GB; HW-DEP enabled: 4GB (the Execute Disable Bit (XD bit) is enabled by default) |
| Windows Server 2003 R2 Enterprise / Windows Server 2008 Enterprise | 64GB | 64GB |
| Windows Server 2003 R2 Standard x64 / Windows Server 2008 Standard (x64) / Windows Server 2008 R2 Standard | 32GB | 32GB |
| Windows Server 2003 R2 Enterprise x64 | 1TB | 192GB |
| Windows Server 2008 Enterprise (x64) / Windows Server 2008 R2 Enterprise | 2TB | 192GB |
| Red Hat Enterprise Linux 5 / 5 Advanced Platform | 16GB | 16GB |
| Red Hat Enterprise Linux 5 (EM64T) / 5 Advanced Platform (EM64T) | 256GB | 192GB |
| VMware ESXi | 256GB or 1TB (maximum memory of a virtual machine: 255GB) | 192GB |

Page 185

CPU Blade (Express5800/B120a-d)

CPU Blade ConfigurationB120a-d

USB option

Internal USB: VMware ESXi 4.1 pre-installed USB flash memory (for BTO only)
* VMware ESXi 4.1 base kit [N8403-040F]

Notes for ordering VMware ESXi 4.1 base kit

* This product may not be available in some regions. Please contact an NEC sales representative to confirm whether it is available in your country.
* VMware ESXi 4.1 Single Server is preinstalled in USB flash memory (the USB flash memory is connected to the internal USB interface).
* This USB flash memory cannot be used for any purpose other than VMware ESXi 4.1.
* Installing VMware ESXi 4.1 on an HDD or SSD is not supported.
* NEC ESMPRO Agent cannot be installed on VMware ESXi 4.1. As a substitute, EXPRESSSCOPE Engine has a function to send alerts to NEC ESMPRO Manager directly, not via NEC ESMPRO Agent.
* The VMware ESXi 4.1 recovery CD is included.
* The NUMA (Non-Uniform Memory Access) function in the BIOS setup menu must be enabled.

Supported OS: VMware ESXi 4.1

Notes for ordering VMware ESXi 4.1 base kit
* VMware ESXi 4.1 base kit (N8403-040F) is a BTO option.
* This USB flash memory cannot be used for any purpose other than VMware ESXi 4.1.
* In a 2-CPU configuration, at least one memory module must be installed for each CPU.
* External storage (iStorage) is required to install a guest OS.

To connect iStorage, a Fibre Channel controller (N8403-018/034) or a 1000BASE-T/10GBASE-KR LAN card (N8403-017/020/021/022/035) is necessary. Please contact an NEC sales representative to confirm available storage devices.

* Server switching function (N+1 recovery) of SigmaSystemCenter is not supported.

Page 186

CPU Blade (Express5800/B120a-d)

Standard LAN port / Standard LAN interface

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card* 1000BASE-T(2ch) adapter [N8403-017] *1* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

Mezzanine slot 1

• By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available.
• Additional LAN interfaces support AFT and ALB, but you cannot configure an AFT/ALB team across multiple LAN interfaces.

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

CPU Blade Configuration

Standard LAN interface
• The standard LAN interface supports AFT and ALB. You cannot team the standard LAN interface with an optional LAN board to configure AFT or ALB.
• The standard LAN interface supports the Remote Wake-Up function. Optional LAN boards do not support this function.

*1: The vIO control function is not supported.

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5 (x86/x64), VMware ESXi 4

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 1

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

* Can be connected only to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F). 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 5 (x86/x64)

Notice
* The B120a-d has no internal storage. A Storage and I/O Blade AD106a or external storage is required.
* To use external Fibre Channel storage, a Fibre Channel controller must be installed in mezzanine slot 1 or 2.
* If you use iSCSI boot, a 1000BASE-T (2ch/4ch) adapter (iSCSI support) is required.

(3) Adding a Fibre channel I/F (4Gbps)

Mezzanine slot 1
To connect Switch Modules to Fibre Channel, see “Switch Module Slot” of SIGMABLADE-H v2/-M

4Gb Fibre channel interface card* Fibre Channel Controller(4Gbps/2ch) [N8403-018]

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5 (x86/x64)

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot 1
To connect Switch Modules to Fibre Channel, see “Switch Module Slot” of SIGMABLADE-H v2/-M

8Gb Fibre channel interface card* Fibre Channel Controller(8Gbps/2ch)[N8403-034]

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 5 (x86/x64), VMware ESXi 4
* The RHEL5 version must be RHEL5.4 or later.

Page 187

CPU Blade (Express5800/B120a-d)

Mezzanine Slot 2 (Type I or Type II)

(1) Adding a 2ch LAN I/F (1Gbps Ether)

1GbE LAN interface card* 1000BASE-T(2ch) adapter [N8403-017] *1* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

To connect Switch Modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the CPU Blade, up to 8 LAN ports or 4 Fibre Channel ports become available. Additional LAN interfaces support AFT and ALB, but you cannot configure an AFT/ALB team across multiple LAN interfaces.

CPU Blade Configuration

(2) Adding a 4ch LAN I/F (1Gbps Ether)

1GbE LAN interface card* 1000BASE-T(4ch) adapter [N8403-020] *1, *2* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *1

To connect Switch Modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine slot 2

*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card in slots 5 and 6. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or 1Gb Pass-Through Card in switch module slots 5, 6, 7 and 8.

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5 (x86/x64), VMware ESXi 4

(3) Adding a 10Gbps LAN/IF

10GbE LAN interface card* 10GbE(2ch) adapter [N8403-024]

Only support 10Gb Intelligent L3 switch

To connect Switch Modules to LAN, see “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine slot 2

* When you use this 10GbE Adapter (2ch) with SIGMABLADE-H v2, you need two 10GbE L3 Switches to use both ports.

(4) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 2

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

* Can be connected only to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F). 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 5 (x86/x64)

(5) Adding a Fibre channel IF (4Gbps)

To connect Switch Modules to Fibre Channel, see “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine slot 2
4Gb Fibre Channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5 (x86/x64)

(6) Adding a Fibre channel IF (8Gbps)

To connect Switch Modules to Fibre Channel, see “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine slot 2
8Gb Fibre Channel interface card
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5 (x86/x64), VMware ESXi 4
* The RHEL5 version must be RHEL5.4 or later.

Page 188

CPU Blade (Express5800/B120a-d)

iSCSI Boot

(1) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 1

By installing an additional LAN interface supporting iSCSI boot, iSCSI boot becomes available.

CPU Blade Configuration

(2) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]
Mezzanine slot 2

(3) Adding 2ch LAN/IF (for iSCSI boot)

* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022]
Mezzanine slot 2

To connect Switch Modules to LAN, see “Switch Module Slot” of SIGMABLADE-M

Supported OS: Windows Server 2008 (x86/x64), Windows Server 2008 R2

Notice for iSCSI boot
* When you use iSCSI boot, a dedicated network for the storage device is necessary.
* Only Windows Server 2008/2008 R2 supports iSCSI boot.
* The configurations of the LAN driver and the OS must be changed.
* For the latest information on supported SAN boot, contact your sales representative.

Page 189

CPU Blade (Express5800/B120a-d)

CPU Blade Configuration (Option card and Switch module)

The following combinations of option cards and switch modules are supported.
* For more details about switch modules, please refer to the Blade Enclosure section of this document.

SIGMABLADE-H v2 (LAN)

| Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-029) | 10Gb Pass-Through Card (N8406-036) | 10Gb Intelligent L3 Switch (N8406-026) |
| 1000BASE-T (2ch) N8403-017 | OK | OK | OK | OK | OK*1 | - |
| 1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK*1 | - |
| 1000BASE-T (2ch) (for iSCSI) N8403-021 | OK | OK | OK | OK | OK*1 | - |
| 1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK*1 | - |
| Standard LAN (B120a/B120a-d/B120b/B120b-d) | OK | OK | OK | OK | OK*1 | - |
| 10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | - |
| 10GbE Adapter (2ch) N8403-024 | - | - | - | - | - | OK |

*1: 10G SFP+ (N8406-037) is not supported. "-": not supported.

SIGMABLADE-H v2 (FC)

| Option card | 4G FC Switch (12port) (N8406-019) | 4G FC Switch (24port) (N8406-020) | 8G FC Switch (12port) (N8406-040) | 8G FC Switch (24port) (N8406-042) | FC Pass-Through Card (N8406-030) |
| 4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK | OK | OK |
| 8G Fibre Channel controller (2ch) N8403-034 | - | - | OK | OK | - |

SIGMABLADE-M (LAN)

| Option card | 1Gb Intelligent L2 Switch (N8406-022A) | 1Gb Intelligent L3 Switch (N8406-023/023A) | 1:10Gb Intelligent L3 Switch (N8406-044) | 1Gb Pass-Through Card (N8406-011) | 10Gb Pass-Through Card (N8406-035) | 10Gb Intelligent L3 Switch (N8406-026) |
| 1000BASE-T (2ch) N8403-017 | OK*2 | OK*2 | OK | OK | OK*1 | - |
| 1000BASE-T (4ch) N8403-020 | OK | OK | OK | OK | OK*1 | - |
| 1000BASE-T (2ch) (for iSCSI) N8403-021 | OK*2 | OK*2 | OK | OK | OK*1 | - |
| 1000BASE-T (4ch) (for iSCSI) N8403-022 | OK | OK | OK | OK | OK*1 | - |
| Standard LAN (B120a/B120a-d/B120b/B120b-d) | OK | OK | OK | OK | OK*1 | - |
| 10Gb-KR (2ch) N8403-035 | - | - | - | - | OK | - |
| 10GbE Adapter (2ch) N8403-024 | - | - | - | - | - | OK |

*1: 10G SFP+ (N8406-037) is not supported. *2: The 1Gb Interlink Expansion Card (N8406-013) is supported. "-": not supported.

SIGMABLADE-M (FC)

| Option card | 4G FC Switch (12port) (N8406-019) | 8G FC Switch (12port) (N8406-040) | FC Pass-Through Card (N8406-021) |
| 4G Fibre Channel controller (2ch) N8403-018 | OK | OK | OK |
| 8G Fibre Channel controller (2ch) N8403-034 | - | OK | - |

Page 190

CPU Blade (Express5800/B120a-d)

CPU Blade Configuration

Server Management (EXPRESSSCOPE Engine 2)

* EXPRESSSCOPE Engine 2 (provided as standard), implemented as an on-board RAS chip
* Has a LAN port dedicated to remote management (no expansion slot required)

The B120a-d comes standard with the "EXPRESSSCOPE Engine 2" remote management chip. All of the management functions below are enabled by default, regardless of OS status.

[Supplement] Server management functions in the B120a-d (EXPRESSSCOPE Engine 2)
* Server monitoring: temperature / HDD / fan / electric power / voltage monitoring, including degradation monitoring (CPU/memory); hardware event log collection
* Stall monitoring / automatic reboot: monitors booting, BIOS/POST stall, OS stall, and shutdown
* Alerting: HW error, boot error, and OS panic (by SNMP or e-mail; also via COM port (modem))
* Remote console (via COM port/LAN): POST/BIOS setup, DOS utilities, panic screen, boot screen, CUI screen (OS console), GUI screen (OS console)
* Remote control (via COM port/LAN): remote reset / power on-off / dump, OS shutdown, remote media (CD/DVD, FD, flash) via LAN, CLP (Command Line Protocol, DMTF compliant), remote control via web browser (no dedicated application required), remote batch, scheduling (no UPS required)
* Maintenance: remote boot (PXE boot), maintenance utility
* Others: automatic IP address setting via DNS/DHCP; remote wakeup (Wake On LAN, Wake On Ring); group management (monitoring/controlling by group); industry standard: IPMI 2.0

Notes
* When the H/W remote KVM console function is used, the number of colors is reduced to 65,536 at a resolution of 1280x1024. For more details, see "Server Management" in the Technical Guide.
* Some functions depend on the configuration. For more details, see "Server Management" in the Technical Guide.

Page 191

Memory Installation Guide

Memory Installation Guide (Registered memory)

[Diagram: DIMM slot installation order. 1-CPU configuration (maximum of 6 DIMMs): install in the numbered slot order 1 to 6 on CPU 1; the slots of CPU 2 stay open. 2-CPU configuration (maximum of 12 DIMMs): install in the numbered slot order 1 to 12 across CPU 1 and CPU 2; the installation sequence differs from that of the 1-CPU configuration.]

This server adopts the QPI architecture (serial transfer). The memory installation rules for this architecture differ from those for the previous FSB architecture (parallel transfer).
• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core/multi-task applications.
• A 1-CPU configuration supports 2- and 3-way interleave; a 2-CPU configuration supports 2-, 4-, and 6-way interleave.
* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.
The memory installation sequence is fixed: install modules in the numbered order shown in the illustrations, from the largest capacity to the smallest.
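The installation rule above (fixed slot order, largest modules first) can be sketched as follows. This is an illustrative sketch, not an NEC utility: the function name and the assumption that slots are simply numbered 1 to 6 for a 1-CPU configuration are taken from the diagram, not from NEC tooling.

```python
# Illustrative sketch: assign DIMMs to the numbered slots of a 1-CPU
# B120a-d configuration, largest capacity first, following the fixed
# installation sequence described above. Slot order 1..6 is assumed
# from the diagram; this is not an NEC configuration tool.

def plan_installation(dimm_sizes_gb, max_slots=6):
    """Return (slot_number, size_gb) pairs in installation order."""
    if len(dimm_sizes_gb) > max_slots:
        raise ValueError("more DIMMs than available slots")
    ordered = sorted(dimm_sizes_gb, reverse=True)  # large to small
    return list(enumerate(ordered, start=1))

# Example: three 4GB modules and two 2GB modules
print(plan_installation([2, 4, 4, 2, 4]))
# → [(1, 4), (2, 4), (3, 4), (4, 2), (5, 2)]
```

The same rule applies per CPU in a 2-CPU configuration, but with the different slot numbering shown in the 2-CPU diagram.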

CPU Blade (Express5800/B120a-d)


Memory interleave

• This system uses Independent Channel mode and increases memory bandwidth when multiple DIMMs are installed in different channels. Memory interleave is used to increase memory access speed.
• The BIOS recognizes the memory configuration and enables memory interleave automatically. Memory areas that do not allow interleave are left non-interleaved.

<Interleaving in a single-processor configuration>
[Diagram: two 2GB DIMMs in different channels of CPU 1 are 2-way interleaved. With a 4GB DIMM and a 2GB DIMM, the first 2GB of the 4GB DIMM and the 2GB DIMM in the other channel are 2-way interleaved; the second 2GB of the 4GB DIMM is not interleaved.]

<Interleaving in a dual-processor configuration>
[Diagram: three 2GB DIMMs per processor operate 6-way interleaved (3-way interleaved per processor when NUMA (Non-Uniform Memory Access) is activated in the BIOS setup menu).]

See the next page for DIMM configuration examples that enable memory interleave.
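The partial-interleave behavior described above (a 4GB DIMM paired with a 2GB DIMM) follows from simple arithmetic: channels interleave only up to the smallest per-channel capacity, and the remainder is accessed without interleave. A hedged sketch of that rule (illustrative only; the actual BIOS behavior may differ in detail):

```python
# Sketch of the interleave arithmetic described above: DIMMs in
# different channels interleave only up to the smallest per-channel
# capacity; any remainder is accessed without interleave.
# Illustrative only, not a description of NEC firmware internals.

def interleave_split(channel_sizes_gb):
    """Return (interleaved_gb, non_interleaved_gb, ways)."""
    ways = len(channel_sizes_gb)
    smallest = min(channel_sizes_gb)
    interleaved = smallest * ways
    non_interleaved = sum(channel_sizes_gb) - interleaved
    return interleaved, non_interleaved, ways

# The example from the text: a 4GB DIMM and a 2GB DIMM in two channels.
print(interleave_split([4, 2]))  # → (4, 2, 2): 4GB 2-way interleaved, 2GB flat
```

With equal DIMMs in every populated channel, the non-interleaved remainder is zero, which is why the configuration tables on the following pages favor matched sets of 2, 3, or 6 modules.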

Page 192

DIMM configuration examples for memory interleave (1-processor configuration)
• Memory is 2- or 3-way interleaved in a single-processor configuration. The interleave mode varies with the memory configuration.
• To increase memory access speed, use a configuration that enables memory interleave.

CPU Blade (Express5800/B120a-d)

| Size | 2-way | 2-way + 3-way | 3-way |
| 4GB | 2GB DIMM x2 | - | - |
| 6GB | - | - | 2GB DIMM x3 |
| 8GB | 4GB DIMM x2 | 2GB DIMM x4 | - |
| 10GB | - | 2GB DIMM x5 | - |
| 12GB | - | - | 2GB DIMM x6 / 4GB DIMM x3 |
| 16GB | 8GB DIMM x2 | 2GB DIMM x4 + 4GB DIMM x2 | - |
| 18GB | - | - | 2GB DIMM x3 + 4GB DIMM x3 |
| 20GB | - | 4GB DIMM x5 | - |
| 24GB | - | - | 4GB DIMM x6 / 8GB DIMM x3 |
| 30GB | - | - | 2GB DIMM x3 + 8GB DIMM x3 |
| 32GB | 16GB DIMM x2 | 4GB DIMM x4 + 8GB DIMM x2 | - |
| 36GB | - | - | 4GB DIMM x3 + 8GB DIMM x3 |
| 40GB | - | 8GB DIMM x5 | - |
| 48GB | - | - | 8GB DIMM x6 / 16GB DIMM x3 |
| 54GB | - | - | 2GB DIMM x3 + 16GB DIMM x3 |
| 60GB | - | - | 4GB DIMM x3 + 16GB DIMM x3 |
| 64GB | - | 8GB DIMM x4 + 16GB DIMM x2 / 8GB DIMM x2 + 16GB DIMM x3 | - |
| 72GB | - | - | 8GB DIMM x3 + 16GB DIMM x3 |
| 80GB | - | 8GB DIMM x2 + 16GB DIMM x4 | - |
| 96GB | - | - | 16GB DIMM x6 |

Page 193

DIMM configuration examples for memory interleave (2-processor configuration)
• Memory is 2-, 4-, or 6-way interleaved in a dual-processor configuration. The interleave mode varies with the memory configuration.
• The table shows DIMM configuration examples for memory interleave when NUMA (Non-Uniform Memory Access) is disabled (the default). When NUMA is used, memory interleave is enabled per processor; see the DIMM configuration examples for the 1-processor configuration on the previous page.
• To increase memory access speed, use 4-way, 2+6-way, 4+6-way, or 6-way interleave.

CPU Blade (Express5800/B120a-d)

| Size | Example DIMM configurations |
| 8GB | 4GB x2; 2GB x4 |
| 12GB | 2GB x6 |
| 16GB | 8GB x2; 4GB x4; 2GB x8; 2GB x4 + 4GB x2 |
| 20GB | 2GB x10; 4GB x4 + 2GB x2; 2GB x6 + 4GB x2 |
| 24GB | 2GB x12; 4GB x6 |
| 28GB | 2GB x2 + 4GB x6; 2GB x6 + 4GB x4 |
| 32GB | 16GB x2; 8GB x4; 4GB x8; 2GB x4 + 4GB x6; 4GB x4 + 8GB x2; 4GB x6 + 8GB x2 |
| 36GB | 2GB x6 + 4GB x6 |
| 40GB | 4GB x10; 4GB x2 + 8GB x4 |
| 48GB | 4GB x12; 8GB x6 |
| 52GB | 2GB x2 + 8GB x6 |
| 56GB | 4GB x2 + 8GB x6; 4GB x6 + 8GB x4 |
| 60GB | 2GB x6 + 8GB x6 |
| 64GB | 16GB x4; 8GB x8; 4GB x4 + 8GB x6; 8GB x4 + 16GB x2 |
| 72GB | 4GB x6 + 8GB x6 |
| 80GB | 8GB x10; 8GB x6 + 16GB x2; 8GB x2 + 16GB x4 |
| 96GB | 8GB x12; 16GB x6 |
| 100GB | 2GB x2 + 16GB x6 |
| 108GB | 2GB x6 + 16GB x6 |
| 120GB | 4GB x6 + 16GB x6 |
| 128GB | 16GB x8 |
| 144GB | 8GB x6 + 16GB x6 |
| 160GB | 16GB x10 |
| 192GB | 16GB x12 |

(Sizes 10GB, 14GB, 18GB, 26GB, 30GB, 98GB, and 102GB have no interleave-capable example configuration.)

Page 194

Memory clock (Registered)

The memory clock differs depending on the CPU and on the presence of additional memory modules:
* Xeon E5502 or E5504: the memory clock is 800MHz.
* Xeon L5520, X5550, or X5570: if an 8GB or 16GB additional memory module is installed, or if 4 or more DIMMs are installed per processor, the memory clock is 800MHz; otherwise it is 1066MHz.

[Example] With a Xeon L5520/X5550/X5570, a configuration containing an 8GB module runs at 800MHz; a configuration with 4 DIMMs (4GB + 2GB + 2GB + 2GB) on one processor also runs at 800MHz.

Memory RAS Feature
* To support the memory RAS features of memory mirroring and lockstep (x8 SDDC), RPQ is required. Contact ITGL for details.
* To use x4 SDDC, replace all the memory with 2GB / 4GB / 8GB / 16GB memory.
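The memory-clock flowchart above reduces to a small decision rule. The sketch below reconstructs it under the assumption, read from the flowchart, that an 8GB or 16GB registered module, or 4 or more DIMMs per processor, forces the lower clock on the 1066MHz-capable CPUs; the function name is illustrative, not NEC tooling.

```python
# Reconstruction of the registered-memory clock flowchart above.
# Assumption (from the flowchart): on L5520/X5550/X5570, an 8GB or
# 16GB module, or 4+ DIMMs per processor, drops the clock to 800MHz;
# otherwise the memory runs at 1066MHz. E5502/E5504 always run 800MHz.

def registered_memory_clock_mhz(cpu, dimm_sizes_gb_per_cpu):
    if cpu in ("E5502", "E5504"):
        return 800
    if cpu in ("L5520", "X5550", "X5570"):
        large_module = any(s >= 8 for s in dimm_sizes_gb_per_cpu)
        many_dimms = len(dimm_sizes_gb_per_cpu) >= 4
        return 800 if (large_module or many_dimms) else 1066
    raise ValueError("unknown CPU for B120a-d: " + cpu)

print(registered_memory_clock_mhz("L5520", [2, 2, 2]))  # → 1066
print(registered_memory_clock_mhz("X5550", [8, 4, 4]))  # → 800
print(registered_memory_clock_mhz("E5504", [2, 2]))     # → 800
```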

Page 195

Memory Installation Guide

Memory Installation Guide (Unbuffered memory)

[Diagram: DIMM slot installation order. 1-CPU configuration (maximum of 4): install in the numbered slot order on CPU 1. 2-CPU configuration (maximum of 8): the installation sequence differs from that of the 1-CPU configuration.]

This server adopts the QPI architecture (serial transfer). The memory installation rules for this architecture differ from those for the previous FSB architecture (parallel transfer).
• The memory controller is integrated into the CPU, so the number of installable DIMMs depends on the number of CPUs.
• This installation guide targets multi-core/multi-task applications.
• A 1-CPU configuration supports 2- and 3-way interleave; a 2-CPU configuration supports 2-, 4-, and 6-way interleave.
* Memory interleave is a technology that improves performance by accessing multiple memory banks at a time.
The memory installation sequence is fixed: install modules in the numbered order shown in the illustrations.
* When using VMware ESX/ESXi, at least one memory module must be installed for each CPU.

CPU Blade (Express5800/B120a-d)


Memory interleave

• This system uses Independent Channel mode and increases memory bandwidth when multiple DIMMs are installed in different channels. Memory interleave is used to increase memory access speed.
• The BIOS recognizes the memory configuration and enables memory interleave automatically. Memory areas that do not allow interleave are left non-interleaved.

<Interleaving in a single-processor configuration>
[Diagram: two 2GB DIMMs in different channels of CPU 1 are 2-way interleaved.]

<Interleaving in a dual-processor configuration>
[Diagram: three 2GB DIMMs per processor operate 6-way interleaved (3-way interleaved per processor when NUMA (Non-Uniform Memory Access) is activated in the BIOS setup menu).]

See the next page for DIMM configuration examples that enable memory interleave.

Page 196

Memory clock (Unbuffered)

The memory clock differs depending on the CPU and on the number of DIMMs installed:
* Xeon E5502 or E5504: 800MHz
* Xeon L5520: 1066MHz (800MHz when 4 or more DIMMs are installed per processor)
* Xeon X5550 or X5570: 1333MHz (reduced when 4 or more DIMMs are installed per processor)

[Example] Xeon L5520 with three 2GB DIMMs per processor: 1066MHz.

Memory RAS Feature
To support the memory RAS features of memory mirroring and lockstep (x8 SDDC), RPQ is required. Contact ITGL for details.

Page 197

Storage and I/O Blade AD106a

Storage and I/O Blade

ICON: The icons on the configuration chart stand for the supported OS, as shown in the following table. For requests to use a Linux OS, please also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

○ … Supported; ■ … Certified by Distributor

2003: Windows Server 2003 (with SP1 or later)
2003x64: Windows Server 2003, x64 Edition
2003R2: Windows Server 2003 R2
2003R2x64: Windows Server 2003 R2, x64 Edition
2008: Windows Server 2008
2008x64: Windows Server 2008 (x64)
2008R2: Windows Server 2008 R2 (x64)
ESXi4.1: VMware ESXi 4.1
EL4: Red Hat Enterprise Linux ES4/AS4
EL4x64: Red Hat Enterprise Linux ES4 (EM64T) / AS4 (EM64T)
EL5: Red Hat Enterprise Linux 5 / 5 Advanced Platform
EL5x64: Red Hat Enterprise Linux 5 (EM64T) / 5 Advanced Platform (EM64T)
EL6: Red Hat Enterprise Linux 6
EL6x64: Red Hat Enterprise Linux 6 (x86_64)

Page 198

Storage and I/O Blade AD106a [N8404-001F]

Storage and I/O Blade supporting up to 6 SAS/SATA HDDs

Features
- Up to 6 SAS/SATA HDDs can be installed
- Equipped with 2 mezzanine slots
- 1000BASE-X ports x2 as standard

| Item | Specification |
| Internal HDD | Diskless as standard (up to 6 disks selected from SAS HDD 73.2GB / 146.5GB / 300GB / 600GB / 900GB or SATA HDD 160GB / 500GB / 1TB); maximum 6TB (SATA HDD 1TB x6) |
| Hot plug | Supported |
| RAID support | RAID 0, 1, 5, 6, 10, 50 |
| 2.5" disk bays [open] | 6 [6] |
| Mezzanine slots [open] | Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported) |
| Interface | 1000BASE-X (connected to Mid Plane) x2 |
| Weight (maximum) | 5kg |
| Size (W x D x H mm) | 52 x 516 x 180 |
| Supported enclosure | SIGMABLADE-M, SIGMABLADE-H v2 |
| Power consumption | 240W (DC) |
| Temperature / humidity conditions | During operation: 10 to 35°C / 20 to 80% (non-condensing); When stored: 10 to 55°C / 20 to 80% (non-condensing) |

Notes: Supported blades
* To connect to the B140a-T, the "coupler plate for AD106a" must be ordered separately.

| | B120b | B120a | B120b-h | B120b-d | B120a-d | B140a-T |
| AD106a | ○*1 *2 | ○*1 | ○*1 | ○ | ○ | ○ |

○: Supported  -: Not supported
*1: OS boot from AD106a not supported
*2: Red Hat Enterprise Linux 6.0 (x86, x86_64) supported

Page 199

Storage and I/O Blade

AD106a Quick Sheet

Storage and I/O Blade AD106a
Expansion Slot

Mezzanine slot 1

Mezzanine slot 2

HDD slots: 6 (all open as standard)

* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided)
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided)
* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided)
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)

Disk

Mezzanine Cards for Mezzanine Slots

CPU Blade slot (SIGMABLADE-H, H v2, M)

* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter [N8403-020] *1
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2, *3
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *1, *2, *3
* 10GBASE-KR(2ch) adapter [N8403-035] *3
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]
* Fibre Channel Controller (8Gbps/2ch) [N8403-034] *3

Note: Mezzanine cards that are not supported on the connected CPU blade (installed in the slot next to the Storage and I/O Blade) are not supported.
*1: Can only be installed in the Type-2 mezzanine slot
*2: Required to use the vIO control function of the EM card
*3: Cannot be used with the B140a-T

* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

* SAS and SATA HDDs cannot be mixed.
* By using the disk array function (RAID 1, 5, 6, 10, 50), hot swap (replacement during operation) is possible. Disk drives of identical size are required for RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed.
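The RAID levels listed for the AD106a trade usable capacity for redundancy in the standard way. As a rough sizing guide (generic RAID arithmetic, not an NEC sizing tool; `usable_gb` and its per-level formulas are illustrative), the usable capacity for n identical disks of size s is:

```python
# Generic usable-capacity arithmetic for the RAID levels the AD106a
# supports (0, 1, 5, 6, 10, 50), assuming n identical disks of s GB.
# Standard RAID math, not NEC-specific sizing.

def usable_gb(level, n, s):
    if level == 0:
        return n * s                 # striping, no redundancy
    if level == 1:
        return s                     # mirror pair (n == 2)
    if level == 5:
        return (n - 1) * s           # one parity disk's worth
    if level == 6:
        return (n - 2) * s           # two parity disks' worth
    if level == 10:
        return n * s // 2            # striped mirrors
    if level == 50:
        return (n - 2) * s           # two RAID5 spans of n/2 disks
    raise ValueError("unsupported RAID level")

# Six 300GB SAS drives (the AD106a maximum bay count):
for lvl in (0, 1, 5, 6, 10, 50):
    n = 2 if lvl == 1 else 6
    print(f"RAID {lvl}: {usable_gb(lvl, n, 300)} GB")
```

For example, six 300GB drives yield 1500GB usable in RAID 5 but only 900GB in RAID 10, which tolerates more drive-failure patterns.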

Other options
* Coupler plate for AD106a [N8403-032] (to connect to the B140a-T)

Note: To install new options, please update the BIOS, firmware, drivers, and the EM firmware of the Blade Enclosure to the latest versions.

Page 200

Storage and I/O Blade

AD106a basic configuration

* N8405-016BF SIGMABLADE-M
Dimensions: Width 484.8mm, Depth 829mm, Height 264.2mm (6U) * Protruding objects included


* N8404-001F Storage and I/O Blade AD106a
2.5-inch SAS/SATA HDD support, HDD selection (SAS: 73.2GB/146.5GB/300GB/600GB/900GB, SATA: 160GB/500GB/1TB), mezzanine slot x2, 1000BASE-X x2 ports (on board)

* N8405-040AF SIGMABLADE-H v2
Dimensions: Width 483mm, Depth 823mm, Height 442mm (10U) * Protruding objects included

* N8403-032 Coupler plate for AD106a (to connect to the B140a-T)

Supported OS: Windows Server 2003 R2 (x86/x64), Windows Server 2008 (x86/x64), Windows Server 2008 R2, Red Hat Enterprise Linux 4/5/6 (x86/x64)

* Intelligent Switchor Pass-Through Cardis required

Page 201

Storage and I/O Blade

AD106a Notes
- This blade must be installed in the slot next to a CPU blade.
- See the table below for the supported blades.
- Connection with a CPU Blade (B120a/B120b/B120a-d/B120b-d) should be made using the slot combinations shown in the red frames below. For connection with the B140a-T, please refer to the next page.
- The AD106a does not have a hot-plug feature. The CPU Blade must be turned off to disconnect the AD106a.
- To use the AD106a, you may need to update the EM firmware of the Blade Enclosure. (Confirm on the download page of the NEC web site: http://www.nec.com/global/prod/express/)

          B120b      B120a   B120b-h   B120b-d   B120a-d   B140a-T
AD106a    ○ *1 *2    ○ *1    ○ *1      ○         ○         ○

○: Supported   -: Not supported
*1 OS boot from AD106a not supported
*2 Red Hat Enterprise Linux 6.0 (x86, x86_64) supported
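The slot-pair rule can be sketched as a helper that, given a CPU blade slot, returns the adjacent slot the AD106a must occupy. A hypothetical sketch assuming the odd/even slot pairing (1,2), (3,4), ... shown in the slot-position figures:

```python
def partner_slot(cpu_slot: int, enclosure_slots: int = 16) -> int:
    """Return the slot paired with cpu_slot.

    SIGMABLADE-H v2 has 16 blade slots; pass enclosure_slots=8 for
    SIGMABLADE-M. Odd slots pair with the next slot up, even slots
    with the previous one.
    """
    if not 1 <= cpu_slot <= enclosure_slots:
        raise ValueError("slot out of range")
    return cpu_slot + 1 if cpu_slot % 2 == 1 else cpu_slot - 1
```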

Equipped position of SIGMABLADE-H v2 (except B140a-T): slot pairs (1,2), (3,4), (5,6), (7,8), (9,10), (11,12), (13,14), (15,16)

Equipped position of SIGMABLADE-M: slot pairs (1,2), (3,4), (5,6), (7,8)

[Figure] PCI Express connection image of CPU blade and AD106a: the CPU blade (chipset, RAID controller) connects through the Blade Enclosure back plane (PCI Express) to the RAID controller on the AD106a.

Page 202

Storage and I/O Blade

AD106a

Notice
* Connection with a CPU Blade (B140a-T) must use one of the slot-pair combinations shown below.
* To connect one AD106a to a B140a-T, install the AD106a in the lower slot. The coupler plate for AD106a is required to install a slot blank kit over the AD106a.
* When you install one AD106a with a B140a-T, you cannot install a CPU blade over the AD106a.
* When you connect two AD106a to a B140a-T, the number of mezzanine cards is limited; see the table below.
* Linux and VMware are not supported when an AD106a is connected to a B140a-T.

Equipped position of SIGMABLADE-H v2 (with B140a-T)


Number of mezzanine cards when you connect two AD106a with one B140a-T

                                         Type-1 only   Mixed with Type-2 cards
Type-1: 1000BASE-T(2ch) board or
        FC controller (4Gb/2ch)          Up to 7       Up to 5   Up to 3   Up to 1
Type-2: 1000BASE-T(4ch) board            -             1         2         3
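The card-count limits above follow a simple pattern: each Type-2 card displaces two Type-1 cards from a seven-card budget. A hypothetical helper expressing this (the function name is an assumption for illustration):

```python
def max_type1_cards(type2_cards: int) -> int:
    """Maximum Type-1 mezzanine cards when two AD106a are connected
    to one B140a-T, given how many Type-2 (1000BASE-T 4ch) cards are
    installed (0 to 3). Matches the table: 7, 5, 3, 1.
    """
    if not 0 <= type2_cards <= 3:
        raise ValueError("0 to 3 Type-2 cards are supported")
    return 7 - 2 * type2_cards
```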


Page 203

Storage and I/O Blade

AD106a

Front View

(1) Hard Disk Drive: upper berth, from left: slots 0, 1, 2; lower berth, from left: slots 3, 4, 5
(2) Power LED
(3) STATUS LED
(4) LAN1 Link/Access LED
(5) LAN2 Link/Access LED
(6) ID LED
(7) Ejecting lever

External View

Page 204

Storage and I/O Blade

Storage and I/O Blade Configuration

AD106a

Hard Disk Drive

* 146.5GB HDD [N8450-023] (2.5" SAS 10,000rpm, Carrier provided)
* 300GB HDD [N8450-024] (2.5" SAS 10,000rpm, Carrier provided)
* 600GB HDD [N8450-030] (2.5" SAS 10,000rpm, Carrier provided)
* 900GB HDD [N8450-031] (2.5" SAS 10,000rpm, Carrier provided)
* 73.2GB HDD [N8450-025] (2.5" SAS 15,000rpm, Carrier provided) *1
* 146.5GB HDD [N8450-026] (2.5" SAS 15,000rpm, Carrier provided) *1

RAID controller(LSILogic SAS controller, 128MB cache, battery built-in)

HDDHDD HDD HDD HDD HDD

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

* 300GB HDD [N8450-038] (2.5" SAS 15,000rpm, Carrier provided) *1
* 160GB HDD [N8450-028] (2.5" SATA 7,200rpm, Carrier provided)
* 500GB HDD [N8450-029] (2.5" SATA 7,200rpm, Carrier provided)
* 1TB HDD [N8450-037] (2.5" SATA 7,200rpm, Carrier provided)

*1: To install these hard disk drives in blades in a SIGMABLADE-M, the following PSU must not be used: N8405-023F Power Unit (2,250W, set of two).

* With the disk array function (RAID 1, 5, 6, 10, 50), hot swap (replacement during operation) is possible. Disk drives of identical size are required for RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed.
* SAS and SATA HDDs cannot be mixed.

Page 205

Storage and I/O Blade

Standard LAN port

Standard LAN Interface

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1 *3
Mezzanine slot 1

Storage and I/O Blade Configuration

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H v2/-M

Standard LAN interface
* The standard LAN interface supports AFT and ALB. However, you cannot put the standard LAN interface and an optional LAN board in the same team to configure AFT or ALB.

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AD106a, up to 8 LAN ports or 4 Fibre Channel ports become available. The LAN interface cards support AFT/ALB teaming, but you cannot configure an AFT/ALB team across multiple LAN interface cards.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Storage and I/O Blade) are not supported.
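The teaming restriction can be sketched as a check that every team member comes from the same interface card. The port data model here is an assumption for illustration, not part of any NEC software:

```python
def team_is_valid(ports):
    """Check an AFT/ALB team against the rule above.

    Each port is a (source, channel) tuple, where source identifies the
    interface it belongs to: "standard", "mezz1", or "mezz2". A team is
    valid only if all member ports come from a single interface.
    """
    return len({source for source, _ in ports}) == 1

# Teaming both channels of the standard LAN interface is allowed;
# mixing the standard interface with a mezzanine card is not.
```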

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

* 1000BASE-T(2ch) adapter [N8403-017] *1 *3* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2 *3

Mezzanine slot 1 See “Switch Module Slot” of SIGMABLADE-Hv2/-M

*1: The vIO control function is not supported.
*2: Not supported when connecting the B140a-T.
*3: Connecting NEC Storage is not supported when the installed OS is RHEL6.

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 1

To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.
* Not supported when connecting the B140a-T.
* Connecting NEC Storage is not supported when the installed OS is RHEL6.

(3) Adding a Fibre channel I/F (4Gbps)

Mezzanine slot 1
To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

4Gb Fibre channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot 1
To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

8Gb Fibre channel interface card *2
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64

*2: Not supported when connecting the B140a-T.
* RHEL5 version should be RHEL5.4 or later.

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64, EL6 / EL6x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

Page 206

Storage and I/O Blade

Mezzanine Slot 2 (Type I or Type II)

(1) Adding a 2ch LAN I/F

1GbE LAN interface card
* 1000BASE-T(2ch) adapter [N8403-017] *1 *3
* 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2 *3

Mezzanine slot 2

Storage and I/O Blade Configuration

(2) Adding a 4ch LAN I/F

1GbE LAN interface card
* 1000BASE-T(4ch) adapter [N8403-020] *1 *3 *4
* 1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2 *3 *4

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AD106a, up to 8 LAN ports or 4 Fibre Channel ports become available. The LAN interface cards support AFT/ALB teaming, but you cannot configure an AFT/ALB team across multiple LAN interface cards.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Storage and I/O Blade) are not supported.

To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

*1: The vIO control function is not supported.*2: Not supported when connecting B140a-T

*1: The vIO control function is not supported.*2: Not supported when connecting B140a-T*3: Connecting NEC Storage is not supported when the installed OS is RHEL6.

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

*2: Not supported when connecting the B140a-T.
*3: When you use this adapter with a SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card in slots 5 and 6. When you use this adapter with a SIGMABLADE-H v2, install the 1Gb Intelligent Switch or the 1Gb Pass-Through Card in switch module slots 5, 6, 7, and 8.
*4: Connecting NEC Storage is not supported when the installed OS is RHEL6.
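The placement rule in footnote *3 can be expressed as a small check. This is an illustrative sketch; the dictionary layout and function name are assumptions, while the slot numbers and module types come from the footnote:

```python
# Switch-module requirements for the 1000BASE-T(4ch) adapter, per enclosure.
REQUIREMENTS = {
    "SIGMABLADE-M":    {"slots": (5, 6),
                        "modules": {"1Gb Intelligent Switch"}},
    "SIGMABLADE-H v2": {"slots": (5, 6, 7, 8),
                        "modules": {"1Gb Intelligent Switch",
                                    "1Gb Pass-Through Card"}},
}

def placement_ok(enclosure: str, slot: int, module: str) -> bool:
    """True if the given switch module in the given slot satisfies the
    4ch-adapter requirement for that enclosure."""
    req = REQUIREMENTS[enclosure]
    return slot in req["slots"] and module in req["modules"]
```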

(3) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
* 10GBASE-KR(2ch) adapter [N8403-035]
Mezzanine slot 2

To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.
* Not supported when connecting the B140a-T.
* Connecting NEC Storage is not supported when the installed OS is RHEL6.

(4) Adding a Fibre channel IF (4Gbps)

To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

Mezzanine slot 2
4Gb Fibre channel interface card
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]

(5) Adding a Fibre channel IF (8Gbps)

To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

Mezzanine slot 2
8Gb Fibre channel interface card *2
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64

*2: Not supported when connecting B140a-T* RHEL5 version should be RHEL5.4 or later.

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64, EL6 / EL6x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

Page 207

Storage and I/O Blade AD106b

Storage and I/O Blade

ICON: The icons on the configuration chart stand for the supported OS, as shown in the following table. For requirements on using a Linux OS, also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

Icon legend: Supported / Certified by Distributor

2003: Windows Server 2003 (with SP1 or later)
2003x64: Windows Server 2003, x64 Edition
2003R2: Windows Server 2003 R2
2003R2x64: Windows Server 2003 R2, x64 Edition
2008: Windows Server 2008
2008x64: Windows Server 2008 (x64)
2008R2: Windows Server 2008 R2 (x64)
ESXi4.1: VMware ESXi 4.1
EL4: Red Hat Enterprise Linux ES4/AS4
EL4x64: Red Hat Enterprise Linux ES4 (EM64T)/AS4 (EM64T)
EL5: Red Hat Enterprise Linux 5/AP5
EL5x64: Red Hat Enterprise Linux 5 (EM64T)/AP5 (EM64T)
EL6: Red Hat Enterprise Linux 6
EL6x64: Red Hat Enterprise Linux 6 (x86_64)

Page 208

Storage and I/O Blade AD106b

Storage and I/O Blade supporting up to 6 HDDs

Features
- Up to 6 SAS/SATA-HDDs or SATA-SSDs can be installed
- Equipped with 2 mezzanine slots
- 2x 1000BASE-X ports as standard

Storage and I/O Blade AD106b

Model name: Storage and I/O Blade AD106b
N-Code: N8404-003F, Diskless (up to 6 disks selected from SAS-HDD 146.5GB / 300GB / 450GB / 600GB / 900GB or SATA-HDD 160GB / 500GB / 1TB)
Storage device:
  Internal HDD: standard Diskless, maximum 6TB (SATA-HDD 1TB x 6)
  Internal SSD: standard Diskless, maximum 600GB (SATA-SSD 100GB x 6)
2.5" Disk Bays [open]: 6 [6]
HotPlug: Supported
RAID support: RAID 0, 1, 5, 6, 10, 50
Mezzanine slot [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported)
Interface: 1000BASE-X (connected to Mid Plane) x2
Weight (maximum): 5 kg
Size (W x D x H mm): 52 x 516 x 180
Power Consumption: 240W (DC)
Temperature / humidity conditions: during operation: 10 to 35C / 20 to 80% (non-condensing); when stored: 10 to 55C / 20 to 80% (non-condensing)
Supported Enclosure: SIGMABLADE-M, SIGMABLADE-H v2

Notes: Supported blades

          B120b      B120a      B120b-h   B120b-d   B120a-d   B140a-T
AD106b    ○ *1 *3    ○ *1 *2    ○ *1      ○         ○ *2      ×

○: Supported   ×: Not supported
*1 OS boot from AD106b not supported
*2 Linux OS not supported
*3 Red Hat Enterprise Linux 6.0 (x86, x86_64) supported
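The support matrix above can be encoded as a lookup table. A sketch (the dict name and tuple layout are illustrative choices; blade names and footnote meanings are taken from the table):

```python
# (supported, notes) per CPU blade model, for the AD106b.
AD106B_SUPPORT = {
    "B120b":   (True,  ["no OS boot from AD106b",
                        "RHEL 6.0 (x86, x86_64) for Linux"]),
    "B120a":   (True,  ["no OS boot from AD106b", "Linux not supported"]),
    "B120b-h": (True,  ["no OS boot from AD106b"]),
    "B120b-d": (True,  []),
    "B120a-d": (True,  ["Linux not supported"]),
    "B140a-T": (False, []),
}

def ad106b_supported(blade: str) -> bool:
    """Whether the AD106b can be paired with the given CPU blade."""
    return AD106B_SUPPORT[blade][0]
```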

Page 209

Storage and I/O Blade

Storage and I/O Blade AD106b: Expansion Slot

Mezzanine slot1

Mezzanine slot2

HDD slot

Open Open Open Open Open Open

・300GB HDD [N8450-032] (2.5-inch SAS 10,000rpm, Carrier provided)
・450GB HDD [N8450-033] (2.5-inch SAS 10,000rpm, Carrier provided)
・600GB HDD [N8450-034] (2.5-inch SAS 10,000rpm, Carrier provided)
・900GB HDD [N8450-035] (2.5-inch SAS 10,000rpm, Carrier provided)
・146.5GB HDD [N8450-036] (2.5-inch SAS 15,000rpm, Carrier provided)
・300GB HDD [N8450-039] (2.5-inch SAS 15,000rpm, Carrier provided)
・160GB HDD [N8450-028] (2.5-inch SATA 7,200rpm, Carrier provided)
・500GB HDD [N8450-029] (2.5-inch SATA 7,200rpm, Carrier provided)
・1TB HDD [N8450-037] (2.5-inch SATA 7,200rpm, Carrier provided)

Disk

AD106b Quick Sheet

Mezzanine Cards for Mezzanine Slots

SIGMABLADE-H v2,M(CPU Blade slot)

* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter *1 [N8403-020]
* 1000BASE-T(2ch) adapter (for iSCSI) *2 [N8403-021]
* 1000BASE-T(4ch) adapter (for iSCSI) *1 *2 [N8403-022]
* 10GBASE-KR adapter(2ch) [N8403-035]
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Storage and I/O Blade) are not supported.
*1: Can only be installed in the Type-2 mezzanine slot.
*2: Required to use the vIO control function of the EM card.

・100GB SSD [N8450-702] (2.5-inch SATA, Carrier provided)

* SAS-HDD, SATA-HDD, and SSD cannot be mixed.
* With the disk array function (RAID 1, 5, 6, 10, 50), hot swap (replacement during operation) is possible. Disk drives of identical size are required for RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed.

Note: To install new options, update the BIOS, firmware, drivers, and the EM firmware of the Blade Enclosure to the latest versions.

Page 210

Storage and I/O Blade

・ N8405-016BF SIGMABLADE-M
  Dimensions: Width 484.8 mm, Depth 829 mm, Height 264.2 mm (6U)
  * Protruding objects included

・ N8404-003 Storage and I/O Blade AD106b
  2.5-inch SAS (6G) supported, HDD selection (146.5GB/300GB/450GB/600GB/900GB/160GB/500GB/1TB), SSD selection (100GB), Mezzanine slot x2, 1000BASE-X x2 ports (on board)

・ N8405-040AF SIGMABLADE-H v2
  Dimensions: Width 483 mm, Depth 823 mm, Height 442 mm (10U)
  * Protruding objects included

AD106b basic configuration
[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

* Intelligent Switch or Pass-Through Card is required

Page 211

Storage and I/O Blade

Notes
- This blade must be installed in the slot adjacent to a CPU blade.
- See the table below for the supported blades.
- Connection with a CPU blade must use one of the slot-pair combinations shown below.
- The AD106b does not support hot plug. The CPU Blade must be powered off before the AD106b is disconnected.
- To use the AD106b, you may need to update the EM firmware of the Blade Enclosure. (Check the download page of the NEC web site: http://www.nec.com/global/prod/express/)

Slot positions of SIGMABLADE-H v2 Slot positions of SIGMABLADE-M

AD106b

          B120b      B120a      B120b-h   B120b-d   B120a-d   B140a-T
AD106b    ○ *1 *3    ○ *1 *2    ○ *1      ○         ○ *2      ×

○: Supported   ×: Not supported
*1 OS boot from AD106b not supported
*2 Linux OS not supported
*3 Red Hat Enterprise Linux 6.0 (x86, x86_64) supported

[Figure] PCI Express connection image of CPU blade and AD106b: the CPU blade (chipset, RAID controller) connects through the Blade Enclosure backplane (PCI Express) to the RAID controller on the AD106b.

SIGMABLADE-H v2: slot pairs (1,2), (3,4), (5,6), (7,8), (9,10), (11,12), (13,14), (15,16)

SIGMABLADE-M: slot pairs (1,2), (3,4), (5,6), (7,8)

Page 212

Storage and I/O Blade

Front View

(1) Hard disk drive: upper berth, from left: slots 0, 1, 2; lower berth, from left: slots 3, 4, 5
(2) POWER LED
(3) STATUS LED
(4) Ether1 Link/Access LED
(5) Ether2 Link/Access LED
(6) ID LED
(7) Eject lever

AD106b

External View

Page 213

Storage and I/O Blade

Storage and I/O Blade Configuration

AD106b

Hard disk drive

RAID controller(LSILogic SAS controller, 128MB cache, battery built-in)

HDDHDD HDD HDD HDD HDD

・300GB HDD [N8450-032] (2.5-inch SAS 10,000rpm, Carrier provided)
・450GB HDD [N8450-033] (2.5-inch SAS 10,000rpm, Carrier provided)
・600GB HDD [N8450-034] (2.5-inch SAS 10,000rpm, Carrier provided)
・900GB HDD [N8450-035] (2.5-inch SAS 10,000rpm, Carrier provided)
・146.5GB HDD [N8450-036] (2.5-inch SAS 15,000rpm, Carrier provided)
・300GB HDD [N8450-039] (2.5-inch SAS 15,000rpm, Carrier provided)
・160GB HDD [N8450-028] (2.5-inch SATA 7,200rpm, Carrier provided)

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

・500GB HDD [N8450-029] (2.5-inch SATA 7,200rpm, Carrier provided)
・1TB HDD [N8450-037] (2.5-inch SATA 7,200rpm, Carrier provided)
・100GB SSD [N8450-702] (2.5-inch SATA, Carrier provided)

* SAS-HDD, SATA-HDD, and SSD cannot be mixed.
* With the disk array function (RAID 1, 5, 6, 10, 50), hot swap (replacement during operation) is possible. Disk drives of identical size are required for RAID functions.
* Hard disk drives with different rotation speeds cannot be mixed.

Page 214

Storage and I/O Blade

Standard LAN port

Standard LANInterface

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine Slot 1 (Type-I only)

(1) Adding a LAN I/F

1GbE LAN interface card
- 1000BASE-T(2ch) adapter [N8403-017] *1 *2
- 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *2

Mezzanine slot 1

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AD106b, up to 8 LAN ports or 4 Fibre Channel ports are available. The LAN interface cards support AFT/ALB teaming; however, you cannot configure an AFT/ALB team with LAN interfaces installed in other slots.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Storage and I/O Blade) are not supported.

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

Storage and I/O Blade Configuration

Note for network configuration
* The standard LAN interface supports AFT and ALB. However, you cannot put the standard LAN interface and an optional LAN board in the same team to configure AFT or ALB.

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

*1: The vIO control function is not supported.*2: Connecting NEC Storage is not supported when the installed OS is RHEL6.

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card
- 10GBASE-KR adapter(2ch) [N8403-035]

Mezzanine slot 1
To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.
* Connecting NEC Storage is not supported when the installed OS is RHEL6.

(3) Adding a Fibre channel I/F (4Gbps)

Mezzanine slot1

To Connect Switch Modules to Fibre Channel,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

4Gb Fibre channel interface card- Fibre Channel controller(4Gbps/2ch) [N8403-018]

(4) Adding a Fibre channel I/F (8Gbps)

Mezzanine slot1To Connect Switch Modules to Fibre Channel,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

8Gb Fibre channel interface card- Fibre Channel controller(8Gbps/2ch) [N8403-034]

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64, EL6 / EL6x64

Page 215

Storage and I/O Blade

Mezzanine slot 2 (Type-I or Type-II)

(1) Adding a 2ch LAN I/F

To Connect Switch Modules to LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M

Mezzanine slot 2

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AD106b, up to 8 LAN ports or 4 Fibre Channel ports are available. The LAN interface cards support AFT/ALB teaming; however, you cannot configure an AFT/ALB team with LAN interfaces installed in other slots.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Storage and I/O Blade) are not supported.

Storage and I/O Blade Configuration

(2) Adding a 4ch LAN I/F

1GbE LAN interface card
・1000BASE-T(4ch) adapter [N8403-020] *1 *2 *3
・1000BASE-T(4ch) adapter (for iSCSI) [N8403-022] *2 *3

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADEHv2/-M

Mezzanine slot 2

1GbE LAN interface card
・1000BASE-T(2ch) adapter [N8403-017] *1 *2
・1000BASE-T(2ch) adapter (for iSCSI) [N8403-021] *1 *2

*1: The vIO control function is not supported.
*2: Connecting NEC Storage is not supported when the installed OS is RHEL6.

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64, EL6 / EL6x64

*1: The vIO control function is not supported.
*2: When you use this adapter with a SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card. When you use this adapter with a SIGMABLADE-H v2, install the 1Gb Intelligent Switch or the 1Gb Pass-Through Card in switch module slots 5, 6, 7, and 8.
*3: Connecting NEC Storage is not supported when the installed OS is RHEL6.

(3) Adding a 10GBASE-KR I/F
10GBASE-KR LAN interface card
・10GBASE-KR adapter(2ch) [N8403-035]

Mezzanine slot 2 To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules must be installed in the 10GbE Pass-Through Card.
* Connecting NEC Storage is not supported when the installed OS is RHEL6.

(4) Adding a Fibre channel IF (4Gbps)

Mezzanine slot 2
To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

4Gb Fibre channel interface card
・Fibre Channel controller (4Gbps/2ch) [N8403-018]

(5) Adding a Fibre channel IF (8Gbps)

Mezzanine slot 2
To Connect Switch Modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M

8Gb Fibre channel interface card
・Fibre Channel controller (8Gbps/2ch) [N8403-034]

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL5 / EL5x64, EL6 / EL6x64

Page 216

Tape Blade AT101a

Tape Blade (AT101a)

ICON: The icons on the configuration chart stand for the supported OS, as shown in the following table. For requirements on using a Linux OS, also refer to the website http://www.nec.com/global/prod/express/linux/index.html or contact a sales representative.

Icon legend: Supported / Certified by Distributor

2003: Windows Server 2003 (with SP1 or later)
2003x64: Windows Server 2003, x64 Edition
2003R2: Windows Server 2003 R2
2003R2x64: Windows Server 2003 R2, x64 Edition
2008: Windows Server 2008
2008x64: Windows Server 2008 (x64)
2008R2: Windows Server 2008 R2 (x64)
ESXi4.1: VMware ESXi 4.1
EL4: Red Hat Enterprise Linux ES4/AS4
EL4x64: Red Hat Enterprise Linux ES4 (EM64T)/AS4 (EM64T)
EL5: Red Hat Enterprise Linux 5/AP5
EL5x64: Red Hat Enterprise Linux 5 (EM64T)/AP5 (EM64T)
EL6: Red Hat Enterprise Linux 6
EL6x64: Red Hat Enterprise Linux 6 (x86_64)

Page 217

AT101a: Tape Blade supporting LTO4

Features
- LTO4 supported
- Equipped with 2 mezzanine slots
- 2x 1000BASE-X ports as standard

Tape Blade (AT101a)

Model name: Tape Blade AT101a
N-Code: N8404-002
Capacity: 800GB (non-compressed, LTO4)
Tape slot [open]: 1 [0]
Mezzanine slot [open]: Type-1 slot x1 [1], Type-2 slot x1 [1] (Type-1 supported)
Interface: 1000BASE-X (connected to Mid Plane) x2
Weight (maximum): 5 kg
Size (W x D x H mm): 52 x 516 x 180
Power Consumption: 120W (DC)
Temperature / humidity conditions: during operation: 10 to 35C / 20 to 80% (non-condensing); when stored: 10 to 45C / 10 to 80% (non-condensing)
Supported Enclosure: SIGMABLADE-M, SIGMABLADE-H v2

Note:
- This product does not support connection with the B140a-T.

Page 218

AT101a Quick Sheet

Tape Blade AT101a: Expansion Slot

Mezzanine slot1

Mezzanine slot2

LTO slot

Tape Blade (AT101a)

Supported Data cartridge:

・LTO2 data cartridge (Read only)・LTO3 data cartridge・LTO3 WORM data cartridge・LTO4 data cartridge・LTO4 WORM data cartridge
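The cartridge list above amounts to a small access-mode table. A sketch (the dict name is illustrative; keys and access modes restate the list, with WORM variants treated the same as their base generations):

```python
# Access mode of each supported data cartridge in the AT101a's LTO4 drive.
CARTRIDGE_ACCESS = {
    "LTO2": "read-only",
    "LTO3": "read/write",
    "LTO3 WORM": "read/write",
    "LTO4": "read/write",
    "LTO4 WORM": "read/write",
}

def can_write(cartridge: str) -> bool:
    """True if the drive can write to the given cartridge type;
    raises KeyError for unsupported cartridges (e.g. LTO1)."""
    return CARTRIDGE_ACCESS[cartridge] == "read/write"
```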

Mezzanine Cards for Mezzanine Slots

SIGMABLADE-H v2,M(CPU Blade slot)

* 1000BASE-T(2ch) adapter [N8403-017]
* 1000BASE-T(4ch) adapter *1 [N8403-020]
* 1000BASE-T(2ch) adapter (for iSCSI) *2 [N8403-021]
* 1000BASE-T(4ch) adapter (for iSCSI) *1 *2 [N8403-022]
* 10GBASE-KR adapter(2ch) [N8403-035]
* Fibre Channel Controller (4Gbps/2ch) [N8403-018]
* Fibre Channel Controller (8Gbps/2ch) [N8403-034]

Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Tape Blade) are not supported.
*1: Can only be installed in the Type-2 mezzanine slot.
*2: Required to use the vIO control function of the EM card.

Note: To install new options, update the BIOS, firmware, drivers, and the EM firmware of the Blade Enclosure to the latest versions.

Page 219

AT101a basic configuration

・ N8405-016B SIGMABLADE-M
  Dimensions: Width 484.8 mm, Depth 829 mm, Height 264.2 mm (6U)
  * Protruding objects included

・ N8404-002 Tape Blade AT101a
  LTO drive x1, Mezzanine slot x2, 1000BASE-X x2 ports (on board)

・ N8405-040AF SIGMABLADE-H v2
  Dimensions: Width 483 mm, Depth 823 mm, Height 442 mm (10U)
  * Protruding objects included

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

Tape Blade (AT101a)

* Intelligent Switch or Pass-Through Card is required

Page 220

AT101a

Notes
- This blade must be installed in the slot adjacent to a CPU blade.
- Supported CPU Blades are the following:
  Express5800/B120a, Express5800/B120b, Express5800/B120b-Lw, Express5800/B120a-d, Express5800/B120b-d, Express5800/B120b-h
- Connection with a CPU blade must use one of the slot-pair combinations shown below.
- The AT101a does not support hot plug. The CPU Blade must be powered off before it is disconnected.
- To use the AT101a, you may need to update the EM firmware of the Blade Enclosure. (Check the download page of the NEC web site: http://www.nec.com/global/prod/express/)

Tape Blade (AT101a)

Slot positions of SIGMABLADE-H v2 Slot positions of SIGMABLADE-M

SIGMABLADE-H v2: slot pairs (1,2), (3,4), (5,6), (7,8), (9,10), (11,12), (13,14), (15,16)

SIGMABLADE-M: slot pairs (1,2), (3,4), (5,6), (7,8)

[Figure] PCI Express connection image of CPU blade and AT101a: the CPU blade (chipset, RAID controller) connects through the Blade Enclosure backplane (PCI Express) to the LTO drive on the AT101a.


Page 221

AT101a

Front View

Tape Blade (AT101a)

(1) LTO drive
(2) POWER LED
(3) STATUS LED
(4) Ether1 Link/Access LED
(5) Ether2 Link/Access LED
(6) ID LED
(7) Eject lever
(8) Pull-out tab
(9) Cartridge door

External View

Page 222

Tape Blade Configuration

AT101a

Tape drive

SAS controller(LSILogic SAS controller)

LTO drive

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

Tape Blade (AT101a)

Supported data cartridge:

・LTO2 data cartridge (Read only)・LTO3 data cartridge・LTO3 WORM data cartridge・LTO4 data cartridge・LTO4 WORM data cartridge

Page 223

Standard LAN port

Standard LANInterface

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-H v2/-M

Mezzanine Slot 1 (Type-I only)

By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AT101a, up to 8 LAN ports or 4 Fibre Channel ports are available. The LAN interface cards support AFT/ALB teaming; however, you cannot configure an AFT/ALB team with LAN interfaces installed in other slots.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot adjacent to the Tape Blade) are not supported.

Tape Blade Configuration

Note for network configuration
* The standard LAN interface supports AFT and ALB. However, you cannot put the standard LAN interface and an optional LAN board in the same team to configure AFT or ALB.

Tape Blade (AT101a)

(1) Adding a LAN I/F

1GbE LAN interface card
- 1000BASE-T(2ch) adapter [N8403-017] *1
- 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

To Connect Switch Modules to LAN,See “Switch Module Slot” of SIGMABLADE-Hv2/-M

[OS icons] 2003R2 / 2003R2x64, 2008 / 2008x64, 2008R2, EL4 / EL4x64, EL5 / EL5x64

Mezzanine slot1

*1: The vIO control function is not supported.

(2) Adding a 10GBASE-KR I/F

10GBASE-KR LAN interface card- 10GBASE-KR adapter(2ch) [N8403-035]

To connect switch modules to a LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.

[Supported OS icons: Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, RHEL EL5/EL5 x64, Windows Server 2008 R2]

(3) Adding a Fibre Channel I/F (4Gbps)
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

4Gb Fibre Channel interface card
- Fibre Channel controller (4Gbps/2ch) [N8403-018]

(4) Adding a Fibre channel I/F (8Gbps)

To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

8Gb Fibre Channel interface card
- Fibre Channel controller (8Gbps/2ch) [N8403-034]

[Supported OS icons: Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, Windows Server 2008 R2, RHEL EL5/EL5 x64]

[Supported OS icons: RHEL EL4/EL4 x64, Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, RHEL EL5/EL5 x64, Windows Server 2008 R2]

Mezzanine slot1

Mezzanine slot1

Mezzanine slot1

* RHEL5 version should be RHEL5.4 or later.

Page 224

Mezzanine Slot 2 (Type-I or Type-II)
By installing additional LAN interfaces or Fibre Channel interfaces in the mezzanine slots of the AT101a, up to 8 LAN ports or 4 Fibre Channel ports are available.
The LAN interface card supports AFT/ALB teaming; however, you cannot configure an AFT/ALB team with LAN interfaces installed in other slots.
Note: Mezzanine cards not supported on the connected CPU blade (installed in the slot next to the Storage and I/O Blade) are not supported.

Tape Blade ConfigurationTape Blade (AT101a)

(1) Adding a 2ch LAN I/F

To connect switch modules to a LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

(2) Adding a 4ch LAN I/F

1GbE LAN interface card
- 1000BASE-T(4ch) adapter *1 *2 [N8403-020]
- 1000BASE-T(4ch) adapter (for iSCSI) *2 [N8403-022]

To connect switch modules to a LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

Mezzanine slot 2

1GbE LAN interface card
- 1000BASE-T(2ch) adapter [N8403-017] *1
- 1000BASE-T(2ch) adapter (for iSCSI) [N8403-021]

[Supported OS icons: RHEL EL4/EL4 x64, Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, RHEL EL5/EL5 x64, Windows Server 2008 R2]

*1: The vIO control function is not supported.

*1: The vIO control function is not supported.
*2: When you use this adapter with SIGMABLADE-M, install the 1Gb Intelligent Switch in switch module slots 5 and 6; you cannot install the 1Gb Pass-Through Card. When you use this adapter with SIGMABLADE-H v2, install the 1Gb Intelligent Switch or 1Gb Pass-Through Card in switch module slots 5, 6, 7, and 8.

(3) Adding a 10GBASE-KR I/F
10GBASE-KR LAN interface card
- 10GBASE-KR adapter(2ch) [N8403-035]

Mezzanine slot 2
To connect switch modules to a LAN, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

* Can be connected to the 10GbE Pass-Through Card and the 10Gb Intelligent L3 Switch (N8406-051F) only. 10GBASE-SR SFP+ modules should be installed in the 10GbE Pass-Through Card.

[Supported OS icons: Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, Windows Server 2008 R2, RHEL EL5/EL5 x64]

(4) Adding a Fibre Channel I/F (4Gbps)

Mezzanine slot 2
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

4Gb Fibre Channel interface card
- Fibre Channel controller (4Gbps/2ch) [N8403-018]

(5) Adding a Fibre Channel I/F (8Gbps)

Mezzanine slot 2
To connect switch modules to Fibre Channel, see "Switch Module Slot" of SIGMABLADE-H v2/-M.

8Gb Fibre Channel interface card
- Fibre Channel controller (8Gbps/2ch) [N8403-034]

[Supported OS icons: Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, Windows Server 2008 R2, RHEL EL5/EL5 x64]

[Supported OS icons: RHEL EL4/EL4 x64, Windows Server 2003 R2/R2 x64, Windows Server 2008/2008 x64, RHEL EL5/EL5 x64, Windows Server 2008 R2]

* RHEL5 version should be RHEL5.4 or later.

Page 225

3. Management Software

Page 226

SigmaSystemCenter

Management software (SigmaSystemCenter)

Page 227

Management software (SigmaSystemCenter)

SigmaSystemCenter

SigmaSystemCenter condenses the virtualization and autonomous-operation technology NEC has cultivated through its extensive track record in Blade Server management. It provides flexible management functions that let you make maximal use of the advanced functions and performance of SIGMABLADE.

Features

►Multi-platform/multi-OS integrated management
Can manage multiple OSs, including Windows and Linux, and a wide variety of Express5800 Series server hardware, including SIGMABLADE.

►Optimal deployment of IT resources
Can easily change server configurations and business applications through GUI operations and scheduler settings, making full use of IT resources. Recovery is also rapid: if you prepare at least one backup server for multiple business servers, the backup server automatically takes over the role in the event of a failure.

►Collaboration between network and storage (*)
Changes the configuration of the connected network and storage at the same time as failure recovery, installation, or configuration changes.

►Management of virtual servers
Can manage virtual servers on VMware ESX Server in collaboration with VMware VirtualCenter.

►Server consolidation/client consolidation
Suits various applications such as server consolidation and client consolidation. You can freely create a server configuration that matches the application, combining server and client services as needed.


Combines cost reduction and availability: recovers by configuring an alternative on a shared backup server in the event of a server failure.

Simple server expansion work: adds a new server by drag & drop alone.

Unified management of multiple servers: resolves complexity through per-group management and timely display of resource status.

Efficient operation of resources: optimally deploys resources according to system load status.

Collective construction/setting of multiple servers: remote installation of the OS from the management screen; collective application of patches to multiple Blades.

[Diagram labels: Introduction, Expansion, Availability, Operation, Management]

(*) Support for the network cooperation function is planned for the future.

Page 228

Product System

SigmaSystemCenter inherits the functions of "SystemGlobe BladeSystemCenter", proven management software for Blade Servers, and is an enhanced product with expanded functions and a wider range of supported platforms.

SigmaSystemCenter Standard Edition
Edition for the server platforms of the Express5800 Series, including SIGMABLADE, and the OSs Windows/Linux server and Windows XP client. Can be used for integrated management of department servers and scattered systems, as well as small and midsized systems such as client consolidation.

For details of SigmaSystemCenter, such as the operating environment, refer to http://www.ace.comp.nec.co.jp/SSC/. (Japanese only)

Standard Edition

Enterprise Edition

* A managed license is required separately.
Supported platforms

(BladeSystemCenter)

Management software (SigmaSystemCenter)

Page 229

Lets a backup server join a service through a simple action, allowing rapid response to business changes.

•Installs everything from the OS to applications simply by dragging and dropping an automatically recognized server onto a job group on the operation management screen. A server can be added easily.

•Can automatically add a backup server to a job group when the job group is overloaded, in combination with performance monitoring tools.

Function-Flexibility

Addition of a server with a single touch of a button

Scale-out enhances the performance of the whole system by increasing the number of servers.

Add a server by a simple operation. (Scale-out)

Management software (SigmaSystemCenter)

Page 230

Easily changes configurations through GUI operation and the scheduler, reducing the load on system administrators.

•Can change system configurations according to a schedule set beforehand.

•Can switch software on a server by drag & drop on the operation management screen or by a simple command.

•Can use a different OS per job.

Function-Flexibility
Simple configuration change (usage change)

Changes the configuration of network devices (switch and load balancer) connected to a server in response to the configuration change of a server. Can further reduce the load necessary for the configuration change.

•Updates the settings of the switch connected to a server at the same time as the server configuration change, and switches the VLAN.

•Automatically updates the load-balancing settings of the load balancer.

•For the software required for automatic network switching and the manageable devices, refer to the operating-environment pages on the SigmaSystemCenter Web page.
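The automatic network switching described above boils down to one state change: when a server joins a job group, its switch port is moved into that group's VLAN. A minimal sketch, with invented group names, VLAN IDs, and function names (not the SigmaSystemCenter API):

```python
# Illustrative model only: moving a server's switch port into the
# VLAN of the job group it was just added to.

vlan_of_group = {"Job A": 10, "Job B": 20}       # assumed group-to-VLAN mapping
switch_port_vlans = {"port1": 10, "port2": 20}   # assumed current switch state

def add_server_to_group(port, group):
    """Reassign the server's switch port to the job group's VLAN."""
    switch_port_vlans[port] = vlan_of_group[group]

# The server behind port2 is dragged into Job A: its port joins VLAN 10.
add_server_to_group("port2", "Job A")
assert switch_port_vlans["port2"] == 10
```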

Automatic switch of network (*)

According to the load balance during day and night, switch the system configuration according to the plan.

[Diagram: Job A / Job B server groups; Addition of server; Addition to VLAN group; Switch]

Management software (SigmaSystemCenter)

(*) Support for the network cooperation function is planned for the future.

Page 231

Unifies system management, such as operating status monitoring and failure monitoring.

•Can refer to the use of server resources and their operating status in a timely manner.

•Can manage in detail the enclosure information of the Express5800/BladeServer, CPU Blade information, and so on.

•Regularly checks for errors in components such as CPU, memory, and disk, and immediately alerts an administrator in the event of a failure.

Function-Maintainability

Server Status Monitoring

Manages multiple servers on a per-group basis, enabling efficient operation management.

•The software attributes of the destination job group (installation of OS and applications) are applied simply by dragging and dropping a server's icon.

Server Configuration Management

Management software (SigmaSystemCenter)

Page 232

Function-Maintainability

Collectively applies OS and application updates. Updates can be applied reliably with a simple operation.

•Can collectively distribute OS patches and application updates per job group or to the whole system.

•Can collectively install software on multiple servers when configuring a system.

Collective distribution of software

Collective distribution of applications and patches by a simple operation.
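The collective-distribution behavior above amounts to applying one update to every server in a job group (or every group). A minimal sketch with invented server and patch names; the apply step stands in for the real installation:

```python
# Illustrative only: distribute one patch to every member of a job group.

job_groups = {"Job A": ["srv1", "srv2", "srv3"], "Job B": ["srv4"]}
applied = {s: [] for servers in job_groups.values() for s in servers}

def distribute(group, patch):
    for server in job_groups[group]:
        applied[server].append(patch)  # stands in for the real install step

distribute("Job A", "KB123456")
assert all("KB123456" in applied[s] for s in job_groups["Job A"])
assert applied["srv4"] == []  # other groups are untouched
```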

Management software (SigmaSystemCenter)

Page 233

Switches to a backup server or a server with low priority, minimizing business downtime.

•Detects a server failure, automatically isolates the failed server, and switches to a backup server.

•Can share a backup server among different jobs/OSs.

Function-Availability

N+1 recovery

N+1 recovery restores service via a backup server in the event of a business server failure, by preparing one backup server for N business servers.

Note: a failed server is restored by installing a disk image, backed up in advance, onto the backup server; the running service is not taken over transparently at the moment of failure.
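The N+1 scheme can be sketched as a tiny state model: N active servers each mapped to a pre-made disk image, plus one shared backup that inherits the failed server's image. Names and the class are illustrative only, not a product API:

```python
# Conceptual sketch of N+1 recovery: N business servers share one backup.
# On failure, the failed server is isolated and its disk image is restored
# onto the backup server (which leaves the pool unprotected until repaired).

class N1Pool:
    def __init__(self, active, backup):
        self.active = dict(active)   # server name -> disk image name
        self.backup = backup

    def fail(self, server):
        image = self.active.pop(server)    # isolate the failed server
        self.active[self.backup] = image   # restore its image onto the backup
        self.backup = None                 # no spare remains

pool = N1Pool({"srv1": "imgA", "srv2": "imgB"}, backup="spare")
pool.fail("srv1")
assert pool.active["spare"] == "imgA" and "srv1" not in pool.active
```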

Switch to a backup server in the event of a failure

Management software (SigmaSystemCenter)

Page 234

Changes the configuration of the storage connected to a server according to the server's configuration change, further reducing the effort needed for the change.

•Can automatically switch data on storage according to server configuration changes.

Function-Availability

Change of storage path

Replacement of server

Change of storage path

Management software (SigmaSystemCenter)

Page 235

Realizes higher availability by adding the "failover" function of CLUSTERPRO to the "N+1 recovery" function of SigmaSystemCenter.

•In the event of a failure, a job can be rapidly taken over by a stand-by server through CLUSTERPRO failover.

•Isolates the network/storage from a failed server and can automatically carry out setup and re-clustering so that one of the backup servers becomes the new stand-by server.

•Can prepare for another failure of the active servers by automatically carrying out the series of processes mentioned above.

•The backup server prepared for restoring a stand-by server can also be used as the backup server for other stand-by servers, making better use of resources.
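The sequence described above (CLUSTERPRO fails over to the stand-by, then SigmaSystemCenter re-clusters a backup as the new stand-by) can be sketched as a two-step state change. Purely illustrative; the names are not CLUSTERPRO's or SigmaSystemCenter's actual interfaces:

```python
# Sketch of the CLUSTERPRO collaboration sequence: failover to the
# stand-by node, then promote a backup server to be the new stand-by.

def recover(cluster, backups):
    """cluster: {'active': name, 'standby': name}; backups: spare pool."""
    cluster["active"] = cluster["standby"]   # step 1: CLUSTERPRO failover
    cluster["standby"] = backups.pop(0)      # step 2: set up + re-cluster a backup
    return cluster

cluster = {"active": "node1", "standby": "node2"}
recover(cluster, ["spare1", "spare2"])
assert cluster == {"active": "node2", "standby": "spare1"}
```

After both steps the cluster is again protected against a further failure, which is exactly the point made in the bullets above.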

Function-Availability

Collaboration with CLUSTERPRO


Failure occurrence! Implement failover by CLUSTERPRO and take over to a stand-by server.

Isolate the failed server and implement setup and re-clustering for a backup server to become a stand-by server.

Automatic execution

Management software (SigmaSystemCenter)

Page 236

Notes on Sales Activity

The following products are not commercialized (as of April 2008):

- SigmaSystemCenter Enterprise edition

- Products of SIGMABLADE controller

- Power management basic pack for SigmaSystemCenter

The following functions are not provided (as of April 2008):

- HP-UX management function

- OracleRAC cooperation function

- Network cooperation function

Management software (SigmaSystemCenter)

Page 237

NEC Express5800/BladeServer HW List 2012/2/1

NEC EXPRESS5800/Blade Server series Hardware List (N: New Product, D: Discontinued Product, R: RPQ Basis Product)

Product Code | Product name | Description | Blade Enclosure | Shipment | Discontinued | Windows | Linux

[Table header (reconstructed from spilled columns): blade model columns 120Bb-6, 120Bb-d6, 120Bb-m6, B140a-T, B120a, B120b, B120a-d, B120b-d, B120b-h, AD106a, AD106b, AT101a (product-code groups N8400-029F through -092F; N8404-001F, -003F, -002); enclosure columns SIGMABLADE-M (N8405-016AF), SIGMABLADE-H (N8405-024F), SIGMABLADE-H v2 (N8405-040F/040AF); Shipment Date *1 and Discontinued Date; Windows columns: 2003 R2 Standard/Enterprise Edition and Standard/Enterprise x64 Edition, 2008 Standard/Enterprise (x64/x86), 2008 R2 Standard/Enterprise (x64), with Remarks; Linux columns: RHEL ES4, AS4, ES4(EM64T), AS4(EM64T), 5, AP5, 5(EM64T), AP5(EM64T), with Remarks.]

CPU BLADE

D N8400-055AF NEC Express5800/B140a-T(2C/E7220) | Xeon E7220 (2.93GHz/1066/2x4M) x1 (MAX 4), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER, VALUMOware DianaScope | X X X X | Shipped 2008/11/30, Discontinued 2011/09/29 | O O O O O O O O O O
D N8400-078F NEC Express5800/B140a-T(4C/E7310) | Xeon E7310 (1.60GHz/1066/2x2M) x1 (MAX 4), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER, VALUMOware DianaScope | X X X X | Shipped 2008/11/30, Discontinued 2011/09/29 | O O O O O O O O O O
D N8400-079F NEC Express5800/B140a-T(6C/E7450) | Xeon E7450 (2.40GHz/1066/3x3M) x1 (MAX 4), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER, VALUMOware DianaScope | X X X X | Shipped 2008/11/30, Discontinued 2011/09/29 | O O O O O O O O O O

N8400-082F NEC Express5800/B120a(4C/E5504) | 1x Xeon E5504 (2GHz, 4MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/07/31 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-083F NEC Express5800/B120a(2C/E5502) | Xeon Nehalem-EP 1.86GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/04/13 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-084F NEC Express5800/B120a(4C/X5550) | Xeon Nehalem-EP 2.66GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/04/13 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-085F NEC Express5800/B120a(4C/L5520) | Xeon Nehalem-EP-L 2.26GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/04/13 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-082G NEC Express5800/B120a(4C/E5504) | 1x Xeon E5504 (2GHz, 4MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-083G NEC Express5800/B120a(2C/E5502) | Xeon Nehalem-EP 1.86GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-084G NEC Express5800/B120a(4C/X5550) | Xeon Nehalem-EP 2.66GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-085G NEC Express5800/B120a(4C/L5520) | Xeon Nehalem-EP-L 2.26GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)

N8400-110F NEC Express5800/B120b(6C/X5670) | 1x Xeon X5670 (2.93GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2010/04/26 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-111F NEC Express5800/B120b(6C/X5650) | Westmere-EP (X5650) 2.66GHz, MEMless, HDDless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; Not shipped to China | X X X X X X | Shipped 2010/11/30 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-112F NEC Express5800/B120b(6C/E5645) | 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-113F NEC Express5800/B120b(4C/E5606) | 1x Xeon E5606 (2.13GHz, 8MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-114F NEC Express5800/B120b(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2010/04/26 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-110G NEC Express5800/B120b(6C/X5670) | 1x Xeon X5670 (2.93GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-111G NEC Express5800/B120b(6C/X5650) | 1x Xeon X5650 (2.66GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; For China | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-112G NEC Express5800/B120b(6C/E5645) | 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-113G NEC Express5800/B120b(4C/E5606) | 1x Xeon E5606 (2.13GHz, 8MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-114G NEC Express5800/B120b(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)

N8400-087F NEC Express5800/B120a-d(4C/E5504) | 1x Xeon E5504 (2GHz, 4MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/06/19 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-088F NEC Express5800/B120a-d(2C/E5502) | Xeon Nehalem-EP 1.86GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/06/19 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-089F NEC Express5800/B120a-d(4C/X5550) | Xeon Nehalem-EP 2.66GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/06/19 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-090F NEC Express5800/B120a-d(4C/L5520) | Xeon Nehalem-EP-L 2.26GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/06/19 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-087G NEC Express5800/B120a-d(4C/E5504) | 1x Xeon E5504 (2GHz, 4MB L3 Cache, 4C/4T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-088G NEC Express5800/B120a-d(2C/E5502) | Xeon Nehalem-EP 1.86GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-089G NEC Express5800/B120a-d(4C/X5550) | Xeon Nehalem-EP 2.66GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O
N8400-090G NEC Express5800/B120a-d(4C/L5520) | Xeon Nehalem-EP-L 2.26GHz x1 (MAX 2), Memory less, Diskless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2009/12/18 | Windows: O O O O O O O*1 O*1 (*1: Need support kit) | RHEL: O O O O O O O O

N8400-117F NEC Express5800/B120b-d(6C/X5670) | 1x Xeon X5670 (2.93GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2010/04/26 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-121F NEC Express5800/B120b-d(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2010/04/26 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-117G NEC Express5800/B120b-d(6C/X5670) | 1x Xeon X5670 (2.93GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)
N8400-121G NEC Express5800/B120b-d(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X X X X | Shipped 2011/02/14 | Windows: O O O O O O O*1*2 O*1*2 (*1: Need support kit; *2: Embedded RAID not supported) | RHEL: O*1 O*1 O*1 O*1 O*1 O*1 O*1 O*1 (*1: Embedded MegaRAID not supported)

N8400-099F Express5800/B120b-h(6C/X5680) | Westmere-EP 3.33GHz, MEMless, HDDless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; Not shipped to China | X X X | Shipped 2010/07/30 | O O O O O O O O O O O O O O O O

Page 238


N8400-100F Express5800/B120b-h(6C/X5650) | Westmere-EP 2.66GHz, MEMless, HDDless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; Not shipped to China | X X X | Shipped 2010/07/30 | O O O O O O O O O O O O O O O O
N8400-102F Express5800/B120b-h(4C/E5645) | 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O
N8400-103F Express5800/B120b-h(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; Not shipped to China | X X X | Shipped 2010/07/30 | O O O O O O O O O O O O O O O O
N8400-104F Express5800/B120b-h(4C/L5630) | Westmere-EP L 2.13GHz, MEMless, HDDless; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER; Not shipped to China | X X X | Shipped 2010/11/30 | O O O O O O O O O O O O O O O O
N8400-099G Express5800/B120b-h(6C/X5680) | 1x Xeon X5680 (3.33GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O
N8400-100G Express5800/B120b-h(6C/X5650) | 1x Xeon X5650 (2.66GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O
N8400-102G Express5800/B120b-h(6C/E5645) | 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O
N8400-103G Express5800/B120b-h(6C/L5640) | 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O
N8400-104G Express5800/B120b-h(4C/L5630) | 1x Xeon L5630 (2.13GHz, 12MB L3 Cache, 4C/8T), no RAM, no HDD; NEC ESMPRO Manager and Agent, NEC EXPRESSBUILDER | X X | Shipped 2011/02/14 | O O O O O O O O O O O O O O O O

N8404-001F Storage and I/O Blade AD106a

Connected to the adjacent CPU blade to add HDDs and mezzanine cards to that CPU blade; 6 SAS/SATA HDD slots with HW RAID 0/1/5/6; 2 mezzanine slots (Type1 x1, Type2 x1)

X X X X X X X X X X 2009/06/19 No CPU on this blade No CPU on this blade

N8404-003F Storage and I/O Blade AD106b

Connected to the adjacent CPU blade to add HDDs and mezzanine cards to that CPU blade; 6 SAS/SATA HDD slots with HW RAID 0/1/5/6; 2 mezzanine slots (Type1 x1, Type2 x1). *1: Windows only supported

X*1 X X*1 X X X X X 2010/12/27 No CPU on this blade No CPU on this blade

N8404-002 Tape Blade AT101a | Connected to the adjacent CPU blade; 2 mezzanine slots (Type1 x1, Type2 x1) | X X X X X X X X | Shipped 2010/12/27 | No CPU on this blade

Enclosure

N8405-016BF SIGMABLADE-M (6U) | 6U, Power x0, fan x0, EM x0, SWMless, RoHS, SIGMABLADE monitor, SATA DVD-ROM Drive | X X X X X X X X X X X X X X X X X | Shipped 2010/11/30

N8405-040AF SIGMABLADE-H v2 (10U) | 10U, Power x0, fan x0, EM x0, SWMless, RoHS, SIGMABLADE monitor, SATA DVD-ROM | X X X X X X X X X X X X X X X X X X | Shipped 2011/01/07

ADDITIONAL POWER UNITS AND FANSN8405-012AF Power unit (200) Redundant, *200V only, RoHS 2006/07/20N8405-017F Power unit Redundant, *for N8405-016AF only, 1500W, RoHS X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2007/01/26N8405-023AF Power unit (2 set) Redundant, *for N8405-016AF only, 2250W, RoHS x2 X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2008/04/23N8405-044F Power unit Redundant, 0 as standard, up to 6 as option *for N8405-040F only,

2250W, RoHSX* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2008/09/30

N8405-055F Power unit Redundant, *for N8405-016BF only, 80PLUS X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2010/11/30N8405-045 Additional Fan Redundant, 0 as standard, up to 10 as option for N8405-040F only, RoHS X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2008/09/30

N8405-053 Additional Fan Redundant, 0 as standard, up to 5 as option, *for N8405-016AF only,RoHS

X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2010/02/08

N8405-019A Additional EM Card 0 as standard, up to 2 as option, *for N8405-016AF only, RoHS X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2007/08/31N8405-043 Additional EM Card 0 as standard, up to 2 as option, *for N8405-040F only, RoHS X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2008/09/30

ADDITIONAL CPU BOARDSN8401-022 CPU Kit (XMPQ/2.40G(2x4M)) Xeon E7340(2.40GHz/1066/2x4M) 2007/10/01N8401-023 CPU Kit (XMPD/2.93G(2x4M)) Xeon E7220(2.93GHz/1066/2x4M) 2007/10/01N8401-032 CPU Kit (Tigerton(E7310) 1.60GHz) Tigerton (4c) E7310 1.60G x1, L2Cache S2Mx2, FSB 1066 X 2008/11/30N8401-033 CPU Kit (Dunnington(E7450) 2.40GHz) Dunnington(6c) E7450 2.40G x1, L2Cache S3Mx3, L3Cache 12M, FSB X 2008/11/30N8401-035 CPU Kit (Nehalem-EP(X5570) 2.93GHz) Nehalem-EP(4c) X5570 2.93G x1, Cache 8M, FSB 6.4GT/s X X 2009/06/19N8401-036 CPU Kit (Nehalem-EP(E5504) 2GHz) Nehalem-EP(4c) E5504 2.00G x1, Cache 4M, FSB 4.8GT/s X X 2009/06/19N8401-037 CPU Kit (2C/E5502) Nehalem-EP(2c) E5502 1.86G x1, Cache 4M, FSB 4.8GT/s X X 2009/04/13N8401-038 CPU Kit (Nehalem-EP(X5550) 2.66GHz) Nehalem-EP(4c) X5550 2.66G x1, Cache 8M, FSB 6.4GT/s X X 2009/04/13N8401-039 CPU Kit (Nehalem-EP(L5520) Lv2.26GHz) Nehalem-EP(4c) L5520 2.26G x1, Cache 8M, FSB 5.86GT/s X X 2009/04/13N8401-052F CPU Kit (6C/X5670) 1x Xeon X5670 (2.93GHz, 12MB L3 Cache, 6C/12T) X X 2011/03/25N8401-053F CPU Kit (6C/L5640) 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T) X X 2011/03/25N8401-055F CPU Kit (6C/X5650) 1x Xeon X5650 (2.66GHz, 12MB L3 Cache, 6C/12T) X 2011/03/25N8401-056F CPU Kit (4C/E5645) 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T) X 2011/03/25N8401-057F CPU Kit (4C/E5606) 1x Xeon E5606 (2.13GHz, 8MB L3 Cache, 4C/4T) X 2011/03/25N8401-046F CPU Kit (6C/X5680) 1x Xeon X5680 (3.33GHz,12MB L3 Cache, 6C/12T) X 2011/03/25N8401-047F CPU Kit (6C/X5650) 1x Xeon X5650 (2.66GHz, 12MB L3 Cache, 6C/12T) X 2011/03/25N8401-048F CPU Kit (4C/E5645) 1x Xeon E5645 (2.40GHz, 12MB L3 Cache, 6C/12T) X 2011/03/25N8401-050F CPU Kit (6C/L5640) 1x Xeon L5640 (2.26GHz, 12MB L3 Cache, 6C/12T) X 2011/03/25N8401-051F CPU Kit (4C/L5630) 1x Xeon L5630 (2.13GHz, 12MB L3 Cache, 4C/8T) X 2011/03/25

ADDITIONAL MEMORY
N8402-033 Additional 2GB Memory Module set FBD-DDR2-667 SDRAM 1G x2, RoHS X 2007/10/01
N8402-034 Additional 4GB Memory Module set FBD-DDR2-667 SDRAM 2G x2, RoHS X 2007/10/01
N8402-035 Additional 8GB Memory Module set FBD-DDR2-667 SDRAM 4G x2, RoHS X 2007/10/01

D N8402-037 Additional 1GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 1GB x1 X 2009/04/13 2011/08/30
D N8402-038 Additional 2GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 2GB x1 X 2009/04/13 2011/08/30
D N8402-039 Additional 4GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 4GB x1 X 2009/04/13 2011/08/30
D N8402-040 Additional 8GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 8GB x1 X 2009/04/13 2011/01/31

N8402-041 Additional 4GB Memory Module set DDR3-1333 SDRAM-DIMM (Unbuffered) 2GB x2 X 2009/04/13
D N8402-042 Additional 1GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 1GB x1 X 2009/06/19 2011/08/30
D N8402-043 Additional 2GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 2GB x1 X 2009/06/19 2011/08/30
D N8402-044 Additional 4GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 4GB x1 X 2009/06/19 2011/08/30

N8402-045 Additional 8GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 8GB x1 X 2009/06/19
N8402-046 Additional 4GB Memory Module set DDR3-1333 SDRAM-DIMM (Unbuffered) 2GB x2 X 2009/06/19
N8402-048F Additional 16GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 16GB x1 for B120a X X 2011/03/25
N8402-057F Additional 16GB Memory Module DDR3-1066 SDRAM-DIMM (Registered) 16GB x1 for B120a-d X 2011/03/25

D N8402-058F Additional 1GB Memory Module DDR3-1333 Registered 1GB x1 for B120b X 2011/03/25 2011/12/27
D N8402-059F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b X 2011/03/25 2011/12/27
D N8402-060F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b X 2011/03/25 2011/12/27

N8402-075F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b X X 2011/07/26
N8402-076F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b X X 2011/07/26
N8402-061F Additional 8GB Memory Module DDR3-1333 Registered 8GB x1 for B120b X X 2011/03/25

D N8402-063F Additional 1GB Memory Module DDR3-1333 Registered 1GB x1 for B120b-d X 2011/03/25 2011/12/27
D N8402-064F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b-d X 2011/03/25 2011/12/27
D N8402-065F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b-d X 2011/03/25 2011/12/27

N8402-080F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b-d X 2011/07/26
N8402-081F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b-d X 2011/07/26
N8402-066F Additional 8GB Memory Module DDR3-1333 Registered 8GB x1 for B120b-d X X 2011/03/25

D N8402-050F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b-h X 2011/03/25 2011/12/27
D N8402-051F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b-h X 2011/03/25 2011/12/27

N8402-088F Additional 2GB Memory Module DDR3-1333 Registered 2GB x1 for B120b-h X 2011/07/26
N8402-089F Additional 4GB Memory Module DDR3-1333 Registered 4GB x1 for B120b-h X 2011/07/26
N8402-052F Additional 8GB Memory Module DDR3-1333 Registered 8GB x1 for B120b-h X 2011/03/25
N8402-053F Additional 16GB Memory Module DDR3-1333 Registered 16GB x1 for B120b-h X 2011/03/25

ADDITIONAL BOARDS
N8403-017 1000BASE-T (2ch) 2ch, PCI EXPRESS, RoHS X X X X X X X X X X X X X X X X X 2006/11/30
N8403-018 Fibre Channel Controller 4Gbps port x2, PCI EXPRESS, RoHS X X X X X X X X X X X X X X X X X 2006/11/30
N8403-019 Disk Array Controller RAID 0/1 X X X X 2006/11/30
N8403-020 1000BASE-T (4ch) 4ch, PCI EXPRESS, RoHS X X X X X X X X X X X X X X X X X 2007/04/30
N8403-021 1000BASE-T (2ch) 2ch, PCI EXPRESS, RoHS (for iSCSI boot) X X X X X X X X X X X X X X X X X 2008/07/31
N8403-022 1000BASE-T (4ch) 4ch, PCI EXPRESS, RoHS (for iSCSI boot) X X X X X X X X X X X X X X 2008/12/26
N8403-024 10GbE Adapter (2ch) 2ch, PCI EXPRESS, RoHS X X X X X X X X X X X X X 2008/09/30
N8403-026 RAID Controller SAS / SATA HW-RAID 0/1 X X 2009/04/13
N8403-027 SATA Interface card SATA SW-RAID 0/1 (Windows only) X X 2009/04/13
N8403-034 8Gb Fibre Channel Controller 8Gbps port x2, PCI EXPRESS, RoHS X X X X X X X X 2010/07/30
N8403-035 10Gb-KR (2ch) 2ch, PCI EXPRESS, RoHS X X X X X X X X 2010/05/31
N8403-038F VMware ESXi 4.1 support kit VMware installation must be completed before going to customer X 2010/09/24
N8403-040F VMware ESXi 4.1 support kit VMware installation must be completed before going to customer X X 2010/09/24

HARD DISK DRIVES
N8450-014 36.3GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X X X X X X 2006/11/30



NEC Express5800/BladeServer HW List 2012/2/1

NEC EXPRESS5800/Blade Server series Hardware List
N: New Product   D: Discontinued Product   R: RPQ Basis Product

Product Code Product name Description Blade Enclosure Shipment Discontinued Windows Linux
N8450-015 73.2GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X X X X X X X 2006/11/30
N8450-017 146GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X X X X X X X 2007/04/30
N8450-018 36.3GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS X X X X 2008/04/23
N8450-019 73.2GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS X X X X 2008/04/23
N8450-020 300GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X 2009/04/27
N8450-021 146.5GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS X 2009/05/29
N8450-022 73.2GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2009/04/13
N8450-023 146.5GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2009/04/13
N8450-024 300GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2009/04/27
N8450-025 73.2GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2009/04/13
N8450-026 146.5GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2009/05/29
N8450-038 300GB HDD SAS, 15,000rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X 2011/07/26
N8450-028 160GB HDD SATA2, 7,200rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X X 2009/10/28
N8450-029 500GB HDD SATA2, 7,200rpm, 2.5inch, carrier attached, RoHS *Carrier is not compatible with previous model X X X X 2009/10/28
N8450-030 600GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X X X 2010/07/30
N8450-031 900GB HDD SAS, 10,000rpm, 2.5inch, carrier attached, RoHS X X X 2011/07/26
N8450-032 300GB HDD 2.5inch SAS HDD, 300GB, 10krpm (6G SAS), for AD106b X 2010/12/27
N8450-033 450GB HDD 2.5inch SAS HDD, 450GB, 10krpm (6G SAS), for AD106b X 2010/12/27
N8450-034 600GB HDD 2.5inch SAS HDD, 600GB, 10krpm (6G SAS), for AD106b X 2010/12/27
N8450-035 900GB HDD 2.5inch SAS HDD, 900GB, 10krpm (6G SAS), for AD106b X 2011/07/26
N8450-036 146.5GB HDD 2.5inch SAS HDD, 146.5GB, 15krpm (6G SAS), for AD106b X 2010/12/27
N8450-039 300GB HDD 2.5inch SAS HDD, 300GB, 15krpm (6G SAS), for AD106b X 2011/07/26
N8450-037 1TB HDD 2.5inch SATA2 7.2K 1TB X X X X 2011/04/26

D N8450-701 50GB 2.5" SSD SATA2, 3Gbps, 2.5inch, for B120b-h X 2010/07/30 2011/04/28
N8450-702 100GB 2.5" SSD SATA2, 3Gbps, 2.5inch, for StorageIO2 X 2011/02/14
N8450-703 100GB 2.5" SSD SATA2, 3Gbps, 2.5inch, for B120b-h X 2011/02/14

NETWORK BLADE

N8406-023A 1Gb Intelligent L3 Switch Network module must be selected either from 1Gb Intelligent L2/L3 Switch or 1Gb Pass-Through Card. X X X X X X X X X X X X X X X X X X 2010/07/30

N8406-044 1:10Gb Intelligent L3 Switch Internal (blade): 1Gbps Ether; External (user port): 10Gbps Ether X X X X X X X X X X X X X X X X X 2010/03/19

N N8406-051F 10Gb Intelligent L3 Switch 10GbE Intelligent L3 Switch for N8405-040AF/016BF X X 2012/01/31

N8406-026 10Gb Intelligent L3 Switch 10Gb Intelligent L3 Switch (4port) *To connect a 10GBASE-SR cable, you need an optional 10GBASE-SR XFP module [N8406-027] for each port. Purchase the 10GBASE-SR XFP module for each port you use. X X X X X X X X X X X X X X X X X X 2008/09/30

N8406-011 1Gb Pass-Through Card Network module must be selected either from 1Gb Intelligent L2/L3 Switch or 1Gb Pass-Through Card. *Installable to N8405-016AF only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2006/12/15

N8406-029 1Gb Pass-Through Card Network module must be selected either from 1Gb Intelligent L2/L3 Switch or 1Gb Pass-Through Card. *Installable to N8405-040F only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2008/09/30

N8406-035 10Gb Pass-Through Card *Installable to N8405-016AF only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2010/05/31
N8406-036 10Gb Pass-Through Card *Installable to N8405-040F only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X 2010/05/31

N8406-040 8G FC Switch (12port) 8G FC Switch (12port) *To connect a FC cable, you need an optional FC SFP Module [N8406-041] for each port. Purchase the FC SFP Module for each port you use. X X X X X X X X X X X X X X X X X 2009/11/30

N8406-042 8G FC Switch (24port) 8G FC Switch (24port) *To connect a FC cable, you need an optional FC SFP Module [N8406-041] for each port. Purchase the FC SFP Module for each port you use. X X X X X X X X X X X X X X X X 2009/11/30

N8406-021 FC Pass-Through Card 2/4G FC Pass-Through Card. To connect a FC cable, you need an optional FC SFP Module [N8406-015] for each port. Purchase the FC SFP Module for each port you use. *Installable to N8405-016AF only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2006/12/15

N8406-030 FC Pass-Through Card 2/4G FC Pass-Through Card. To connect a FC cable, you need an optional FC SFP Module [N8406-015] for each port. Purchase the FC SFP Module for each port you use. *Installable to N8405-040F only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2008/09/30

N8406-013 1Gb Interlink Expansion Card To expand interlink of L2 Switch (N8406-022) *Installable to N8405-016AF only X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* X* 2006/04/01

N8406-024 1000Base-SX SFP module for 1Gb Intelligent L3 Switch X X X 2007/10/01

N8406-027 10GBASE-SR XFP module for 10Gb Intelligent L3 Switch X X X 2008/09/30

N8406-037 10GBASE-SR SFP+ module for 10Gb Pass-Through Card N8406-035/036 X X 2010/05/31
N8406-039 1000Base-T SFP module for 10Gb Pass-Through Card N8406-035/036 X X 2010/05/31

N8406-015 FC SFP module for FC Pass-Through Card N8406-021/030 X X 2006/12/15

NF9330-SF02 FC SFP module for FC Switch X X X 2006/12/15

N8406-041 FC SFP module for 8G FC Switch N8406-040/042 X X 2009/11/30

EXTERNAL OPTIONS

N8405-032 Slot Cover Slot Blank Kit for N8405-016AF X 2006/11/30

N8405-046 Slot Cover(CPU Blade) Slot Cover(CPU Blade) for N8405-040F X 2008/09/30

N8405-033 Front Bezel Front Bezel for N8405-016AF X 2006/11/30

N8405-051 Front Bezel Front Bezel for N8405-040F X 2008/09/30

N8191-09AF Server Switch Unit (8Server/USB) Enables up to 8 servers to share a display, mouse (PS/2), and keyboard (PS/2). Can connect to both USB & PS/2 ports of servers. RoHS compliant X X X X X X X X X X X X X X 2006/05/29

N8870-002AF 104-key keyboard USB, 1.8m cable, RoHS X X X X X X X X X X X X X X 2006/07/31
N8870-010A Mouse (USB) USB, 2 buttons, RoHS X X X X X X X X X X X X X X 2006/05/31
N8170-23 Mouse (PS/2) PS/2, 2-button with wheel, RoHS, *via SSU X* X* 2010/01/26
N8403-032 Coupler plate for AD106a Coupler plate to couple two AD106a vertically X X X 2009/08/03

CABLES

K410-104(02) Display/keyboard extension cable 2m X X X X X X X X X X X X X X 2002/10/21

K410-104(03) Display/keyboard extension cable 3m X X X X X X X X X X X X X X 2002/10/21

K410-118(1A) Switch unit connection cable set(USB,1.8m) Connects Server switch unit[N8191-09AF] with servers (for USB I/Fs) X X X X X X X X X X X X X X 2002/10/21

K410-118(03) Switch unit connection cable set(USB,3m) Connects Server switch unit[N8191-09AF] with servers (for USB I/Fs) X X X X X X X X X X X X X X 2002/10/21

K410-118(05) Switch unit connection cable set (USB,5m) Connects Server switch unit[N8191-09AF] with servers (for USB I/Fs) X X X X X X X X X X X X X X 2002/10/21

K410-119(1A) Switch unit connection cable set (PS/2, 1.8m) Connects Server switch unit [N8191-09AF] with servers (for PS/2 keyboard/mouse ports); also connects between two Server switch units [N8191-09AF] X 2002/10/21

K410-119(03) Switch unit connection cable set (PS/2, 3m) Connects Server switch unit [N8191-09AF] with servers (for PS/2 I/Fs) X 2002/10/21
K410-119(05) Switch unit connection cable set (PS/2, 5m) Connects Server switch unit [N8191-09AF] with servers (for PS/2 I/Fs) X 2002/10/21
K410-84(05) Cross Cable (5m) D-sub 9-pin/D-sub 9-pin cross cable (5m) X X X X X X X X X X X X X X X X X 2005/01/19
K410-107(03) ICMB cable 3m 2004/04/19
K410-150(00) SUV Cable Serial x1, USB x2, Video x1, consolidated cable; connects B120b to SIGMABLADE X X X X X X X X X X X X X X 2006/11/30

N K410-203(03) 10G SFP+ Cable (3m) For 10GbE Intelligent L3 Switch [N8406-051F] X X 2012/01/31
NF9320-SJ01E Fibre Channel Cable 5m X X X 2006/11/30
NF9320-SJ02 Fibre Channel Cable 10m X X X 2006/11/30
NF9320-SJ03 Fibre Channel Cable 15m X X X 2006/11/30

NF9320-SJ04 Fibre Channel Cable 20m X X X 2006/11/30

*1: Shipping dates may differ according to country.



Note: AC cable for SIGMABLADE-Hv2

• The AC inlet of the new blade server is an IEC C20 plug.
• The AC cable should be:
  – female: IEC320 C19 socket
  – male: any plug is acceptable if it is widely used in your country
  – 200V-240V, 20A or over

[Figure: IEC C13 socket (for other Express5800 100 series servers) vs. IEC C19 socket (for blade servers)]


Note: AC cable for SIGMABLADE-Hv2 [N8405-040F/044F]

• Please check the external form of the plug.

[Figure: AC cable (local procurement); plug diameter up to 15 mm, plug body up to 60 mm. All tie wraps are provided as standard.]


Note: AC cable for SIGMABLADE-M

• The AC inlet of the new blade server is an IEC C20 plug.
• The AC cable should be:
  – female: IEC320 C19 socket
  – male: any plug is acceptable if it is widely used in your country
  – 200V-240V, 15A or over (1500W)
  – 200V-240V, 20A or over (2250W)

[Figure: IEC C13 socket (for other Express5800 100 series servers) vs. IEC C19 socket (for blade servers)]