Here is Your Customized Document

Your Configuration is:
Action to Perform - Learn about storage system
Information Type - Hardware and operational overview
Storage-System Model - CX4-960

Reporting Problems

To send comments or report errors regarding this document, please email: [email protected]. For issues not related to this document, contact your service provider. Refer to Document ID: 1057710
Content Creation Date 2010/1/31
CX4-960 Storage Systems Hardware and Operational Overview
This document describes the hardware, powerup and powerdown sequences, and status indicators for the CX4-960 storage systems with UltraFlex™ technology.
Major topics are:
Storage-system major components ................... 2
Storage processor enclosure (SPE) ................. 4
Disk-array enclosures (DAEs) ...................... 12
Standby power supplies (SPSs) ..................... 18
Powerup and powerdown sequence .................... 19
Status lights (LEDs) and indicators ............... 25
Storage-system major components
The storage system consists of:
A storage processor enclosure (SPE)
Two standby power supplies (SPSs)
One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
Optional DAEs
A DAE is sometimes referred to as a DAE3P.
The high-availability features for the storage system include:
Redundant storage processors (SPs) configured with UltraFlex™ I/O modules
Standby power supplies (SPS)
Redundant power supplies
Redundant cooling modules
The SPE is a highly available storage enclosure with redundant power and cooling. It is 4U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs), the power supplies, and the cooling modules.
Each storage processor (SP) uses UltraFlex I/O modules to facilitate:
4 Gb/s and/or 8 Gb/s Fibre Channel connectivity, and 1 Gb/s and/or 10 Gb/s Ethernet connectivity through its front-end ports to Windows, VMware, and UNIX hosts
4 Gb/s Fibre Channel connectivity through its back-end ports to the storage system’s disk-array enclosures (DAEs).
The SP senses the speed of the incoming host I/O and sets the speed of its front-end ports to the lowest speed it senses. The speed of the DAEs determines the speed of the back-end ports through which they are connected to the SPs.
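The speed-matching behavior described above can be sketched as follows. This is an illustrative sketch in Python, not EMC firmware logic; the function names and inputs are invented:

```python
# Sketch of the speed rules described above (illustrative only).
# Front end: the SP sets a port to the lowest link speed it senses
# among the connected hosts. Back end: the port follows the DAE speed.

def front_end_speed(sensed_host_speeds_gbps):
    """Lowest speed sensed among hosts on a front-end port."""
    if not sensed_host_speeds_gbps:
        raise ValueError("no host I/O sensed on this port")
    return min(sensed_host_speeds_gbps)

def back_end_speed(dae_speed_gbps):
    """A back-end port runs at the speed of the DAEs it connects to."""
    return dae_speed_gbps
```

For example, a front-end port sensing 2, 4, and 8 Gb/s hosts would settle at 2 Gb/s.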
Table 1 gives the number of Fibre Channel and iSCSI I/O front-end ports and Fibre Channel back-end ports supported for each SP. The storage system cannot have both the maximum number of Fibre Channel front-end ports and the maximum number of iSCSI front-end ports listed in Table 1. The actual number of Fibre Channel and iSCSI front-end ports for an SP is determined by the number and type of UltraFlex I/O modules in the storage system. For more information, refer to UltraFlex I/O modules, page 6.
Table 1 Front-end and back-end ports per SP

Storage system   Fibre Channel         iSCSI                 Fibre Channel
                 front-end I/O ports   front-end I/O ports   back-end disk ports
CX4-960          4, 8, or 12           2, 4, 6, or 8         4
                 4 or 8                2, 4, or 6            8
The storage system requires at least five disks and works in conjunction with one or more disk-array enclosures (DAEs) to provide terabytes of highly available disk storage. A DAE is a disk enclosure with slots for up to 15 Fibre Channel or SATA disks. The disks within the DAE are connected through a 4 Gb/s point-to-point Fibre Channel fabric. Each DAE connects to the SPE or another DAE with simple FC-AL serial cabling.
The CX4-960 storage system supports a total of 32 or 64 DAEs for a total of 480 or 960 disks on its four or eight back-end buses, depending on whether it is a CX4-960 storage system with the expansion hardware and software options. Each bus supports as many as eight DAEs for a total of 120 disks per bus. You can place the disk enclosures in the same cabinet as the SPE, or in one or more separate cabinets. High-availability features are standard.
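The capacity figures above follow from simple arithmetic, sketched here in Python (constants taken from the text):

```python
# Capacity arithmetic from the text: 15 disks per DAE, up to 8 DAEs
# per back-end bus, and 4 or 8 buses depending on the expansion options.
DISKS_PER_DAE = 15
DAES_PER_BUS = 8

def max_daes(back_end_buses):
    """Maximum DAEs for the given number of back-end buses."""
    return back_end_buses * DAES_PER_BUS

def max_disks(back_end_buses):
    """Maximum disks for the given number of back-end buses."""
    return max_daes(back_end_buses) * DISKS_PER_DAE
```

A base CX4-960 (4 buses) thus supports 32 DAEs and 480 disks; with the expansion options (8 buses), 64 DAEs and 960 disks.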
Storage processor enclosure (SPE)
The SPE components include:
A sheet-metal enclosure with a midplane and front bezel
Two storage processors (SP A and SP B), each consisting of one CPU module and an I/O carrier with slots for I/O modules
Two annex I/O carriers that provide two additional UltraFlex I/O module slots per SP
Four cooling modules
Two power supplies
Two management modules – one associated with SP A and one associated with SP B. Each module has SPS, management, and service connectors.
Figure 1 and Figure 2 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A. The second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.
Figure 1 SPE components (front with bezel removed). Callouts: power supply A, cooling modules A-D, power supply B.
Figure 2 SPE components (back). Callouts: CPU A, CPU B, annex I/O carrier A, annex I/O carrier B, management module A, management module B, I/O carrier A, I/O carrier B.
Midplane
The midplane distributes power and signals to all the enclosure components. The CPU modules, I/O modules, power supplies, and cooling modules plug directly into midplane connectors.
Front bezel
The front bezel has a key lock and two latch release buttons. Pressing the latch release buttons releases the bezel from the enclosure.
Storage processors (SPs)
The SP is the SPE’s intelligent component and acts as the control center. Each SP includes:
One CPU module with:
Two quad-core processors
16 GB of DDR-II DIMM (double data rate, dual in-line memory module) memory
I/O module enclosure with four UltraFlex I/O module slots
One annex I/O carrier provides two additional UltraFlex I/O module slots for each SP. Although this I/O carrier is not actually part of the SP, it is associated with the SP.
One management module with:
One GbE Ethernet LAN port for management and backup (RJ45 connector)
One GbE Ethernet LAN port for peer service (RJ45 connector)
One serial port for connection to a standby power supply (SPS) (micro DB9 connector)
One serial port for RS-232 connection to a service console (micro DB9 connector)
UltraFlex I/O modules
Table 2 lists the number of I/O modules the storage system supports and the slots the I/O modules can occupy. More slots are available for optional I/O modules than the maximum number of optional I/O modules supported because some slots are occupied by required I/O modules. With the exception of slots A0 and B0, the slots occupied by the required I/O modules can vary between configurations. Figure 3 shows the I/O module slot locations and the I/O modules for the standard minimum configuration.
Table 2 Number of supported I/O modules per SP

                                     All I/O modules                        Optional I/O modules
Storage system                       Number per SP  SP A slots  SP B slots  Number per SP  SP A slots  SP B slots
Base CX4-960                         6              A0-A5       B0-B5       3              A2-A5       B2-B5
Base CX4-960 with expansion
option (see note)                    6              A0-A5       B0-B5       2              A3-A5       B3-B5

Note: A base CX4-960 with expansion option is a CX4-960 base storage system with the expansion option – model CX4-960EXPSW and model CX4-960EXPIO. This option provides 4 additional Fibre Channel back-end ports per SP.
Figure 3 I/O module slot locations (I/O modules for a standard minimum configuration shown). Slot labels: A0-A5 and B0-B5.
The following types of modules are available:
4 or 8 Gb Fibre Channel (FC) modules with either:
2 back-end (BE) ports for disk bus connections and 2 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
or
4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.
4 Gb Fibre Channel (FC) modules with 4 back-end (BE) ports for disk bus connections.
1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA).
The 10 GbE iSCSI module requires FLARE 04.29 or later.
Table 3 lists the I/O modules available for the storage system and the number of each module that is standard and/or optional.
Table 3 I/O modules per SP

                                          Number of modules per SP
Module                                    Standard                          Optional
4 or 8 Gb FC module (see note 1):         2                                 0
  2 BE ports (0, 1), 2 FE ports (2, 3)
4 or 8 Gb FC module (see note 1):         0 for base CX4-960;               0
  4 BE ports (0, 1, 2, 3)                 1 for base CX4-960 with
                                          expansion option (see note 2)
4 or 8 Gb FC module:                      0                                 1 to 2
  4 FE ports (0, 1, 2, 3)
1 or 10 GbE iSCSI module:                 1                                 1 to 3 for base CX4-960 (see note 3);
  2 FE ports (0, 1)                                                         1 to 2 for base CX4-960 with
                                                                            expansion option (see note 2)

Note 1: In a storage system that shipped from the factory, the FC modules with BE ports are either all 4 Gb FC modules or all 8 Gb FC modules.
Note 2: A base CX4-960 with expansion option is a CX4-960 base storage system with the expansion option – model CX4-960EXPSW and model CX4-960EXPIO. This option provides 4 additional Fibre Channel back-end ports per SP.
Note 3: The maximum number of 10 GbE iSCSI modules per SP is 2.
IMPORTANT
I/O modules are always installed in pairs – one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with 2 back-end ports and 2 front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.
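The pairing rule above amounts to a simple configuration check, sketched here in Python (the module-type labels and function name are invented for illustration, not EMC software):

```python
# Sketch of the I/O module pairing rule: slot 0 on each SP must hold
# the FC module with 2 BE + 2 FE ports, and every other populated slot
# must hold the same module type on both SPs. Labels are invented.

REQUIRED_SLOT0 = "FC 2BE+2FE"

def modules_paired(sp_a_slots, sp_b_slots):
    """sp_x_slots: dict of slot number -> module type string (or absent)."""
    if sp_a_slots.get(0) != REQUIRED_SLOT0 or sp_b_slots.get(0) != REQUIRED_SLOT0:
        return False
    every_slot = set(sp_a_slots) | set(sp_b_slots)
    return all(sp_a_slots.get(s) == sp_b_slots.get(s) for s in every_slot)
```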
The actual number of each type of optional Fibre Channel and iSCSI I/O modules supported for a specific storage-system configuration is limited by the maximum number of Fibre Channel front-end (FE) ports and iSCSI front-end ports supported for the storage system. Table 4 lists the maximum number of Fibre Channel and iSCSI FE ports per SP for the storage system.
Table 4 Maximum number of front-end (FE) ports per SP

Storage system                       Maximum Fibre Channel   Maximum iSCSI FE ports
                                     FE ports per SP         per SP (see note)
Base CX4-960                         12                      8
Base CX4-960 with expansion option   12                      6

Note: The maximum number of 10 GbE iSCSI ports per SP is 4.
Back-end (BE) port connectivity
Each FC back-end port has a connector for a copper SFP-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module’s speed. Table 5 lists the FC modules that support the back-end buses.
Table 5 FC I/O module ports supporting back-end buses

Storage system and FC modules                                     Back-end bus (module port)

Base CX4-960
FC module in slots A0 and B0                                      Bus 0 (port 0), Bus 1 (port 1)
FC module with both BE and FE ports, usually in slots A1 and B1   Bus 2 (port 0), Bus 3 (port 1)

Base CX4-960 with expansion option (see note)
FC module in slots A0 and B0                                      Bus 0 (port 0), Bus 1 (port 1)
FC module with both BE and FE ports, usually in slots A1 and B1   Bus 2 (port 0), Bus 3 (port 1)
FC module with only BE ports, usually in slots A2 and B2          Bus 4 (port 0), Bus 5 (port 1), Bus 6 (port 2), Bus 7 (port 3)

Note: A base CX4-960 with expansion option is a CX4-960 base storage system with the expansion option – model CX4-960EXPSW and model CX4-960EXPIO. This option provides 4 additional Fibre Channel back-end ports per SP.
Fibre Channel (FC) front-end connectivity
Each FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module’s FE ports connect auto-adjust their speed to 4 Gb/s.
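The front-end speed compatibility rules above can be summarized as a small lookup (an illustrative Python sketch, not a real FC negotiation protocol):

```python
# Supported FE link speeds (Gb/s) per FC module type, from the text:
# a 4 Gb module runs at 1/2/4 Gb/s, an 8 Gb module at 2/4/8 Gb/s.
MODULE_SPEEDS = {4: {1, 2, 4}, 8: {2, 4, 8}}

def negotiated_speed(module_gb, peer_speeds_gbps):
    """Highest speed both ends support, or None if there is no link."""
    common = MODULE_SPEEDS[module_gb] & set(peer_speeds_gbps)
    return max(common) if common else None
```

So an 8 Gb module offered only 1 Gb/s gets no link, while a 4 Gb module in an 8 Gb/s fabric links at 4 Gb/s if the peer can drop to that speed.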
iSCSI front-end connectivity
Each iSCSI front-end port on a 1 GbE iSCSI module has a 1GBaseT copper connector for a copper Ethernet cable, and can auto-adjust its speed to 10 Mb/s, 100 Mb/s, or 1 Gb/s. Each iSCSI front-end port on a 10 GbE iSCSI module has an SFP shielded connector for an optical Ethernet cable, and runs at a fixed 10 Gb/s speed. Because the 1 GbE and the 10 GbE Ethernet iSCSI connection topologies are not interoperable, the 1 GbE and the 10 GbE iSCSI modules cannot operate on the same physical network.
Power supplies
A power supply is located on each side of the SPE. Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord and power switch. Each supply supports the SPE and shares load currents with the other supply.
An SP or power supply with power-related faults does not adversely affect the operation of any other component. You can replace a failed power supply while the SPE is powered up.
SPE field-replaceable units (FRUs)
The following are field-replaceable units (FRUs) that you can replace while the system is powered up:
Power supplies
Cooling modules
Management modules
SFP modules, which plug into the Fibre Channel front-end port connectors in the Fibre Channel I/O modules
Contact your service provider to replace a failed CPU board, CPU memory module, or I/O module.
Disk-array enclosures (DAEs)
DAE UltraPoint™ (sometimes called "point-to-point") disk-array enclosures are highly available, high-performance, high-capacity storage-system components that use a Fibre Channel Arbitrated Loop (FC-AL) as the interconnect interface. A disk enclosure connects to another DAE or an SPE and is managed by storage-system software in RAID (redundant array of independent disks) configurations. The enclosure is only 3U (5.25 inches) high, but can include 15 hard disk drive/carrier modules. Its modular, scalable design allows for additional disk storage as your needs increase.
A DAE includes either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. CX4-960 systems also support solid state disk (SSD) Fibre Channel modules, also known as enterprise flash drive (EFD) Fibre Channel modules. You cannot mix SATA and Fibre Channel components within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosure operates at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).
Simple serial cabling provides easy scalability. You can interconnect disk enclosures to form a large disk storage system; the number and size of buses depends on the capabilities of your storage processor. You can place the disk enclosures in the same cabinet, or in one or more separate cabinets. High-availability features are standard in the DAE.
The DAE includes the following components:
A sheet-metal enclosure with a midplane and front bezel
Two FC-AL link control cards (LCCs) to manage disk modules
As many as 15 disk modules
Two power supply/system cooling modules (referred to as power/cooling modules)
Any unoccupied disk module slot has a filler module to maintain airflow.
The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU).
The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up.
Figure 4 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations.
For increased clarity, the following figures depict the DAE outside of the rack or cabinet. Your DAEs may arrive installed in a rackmount cabinet along with the SPE.
Figure 4 DAE outside the cabinet – front and rear views. Callouts: power LED (green or blue), fault LED (amber), disk activity LED (green), power/cooling modules A and B, link control cards A and B.
As shown in Figure 5, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes the bus ID when the operating system is loaded.
Figure 5 Disk enclosure bus (loop) and address indicators. Callouts: bus ID, enclosure address, EA selection (press here to change EA).
The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 includes 30-44, and so on.
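The contiguous numbering above is a simple mapping from enclosure address and slot position to a system-wide disk ID (an illustrative Python sketch; the function name is invented):

```python
# Disk module IDs are contiguous: enclosure 0 holds 0-14,
# enclosure 1 holds 15-29, and so on (15 slots per DAE).
DISKS_PER_ENCLOSURE = 15

def disk_module_id(enclosure_address, slot):
    """Map (EA, slot 0-14) to the system-wide disk module ID."""
    if not 0 <= slot < DISKS_PER_ENCLOSURE:
        raise ValueError("slot must be 0-14")
    return enclosure_address * DISKS_PER_ENCLOSURE + slot
```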
Midplane
A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. LCCs, power/cooling modules, and disk drives – the enclosure’s field-replaceable units (FRUs) – plug directly into the midplane.
Front bezel
The front bezel has a locking latch and an electromagnetic interference (EMI) shield. You must remove the bezel to remove and install drive modules. EMI compliance requires a properly installed front bezel.
Link control cards (LCCs)
An LCC supports and controls one Fibre Channel bus and monitors the DAE.
Figure 6 LCC connectors and status LEDs. Callouts: expansion link active LED, primary link active LED, fault LED (amber), power LED (green).
A blue link active LED indicates a DAE operating at 4 Gb/s; the link active LEDs are green in a DAE operating at 2 Gb/s.
The LCCs in a DAE connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology.
Internally, each DAE LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals. For traffic from the system’s storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive’s output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC’s enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal source (to the storage processor). For traffic directed to the system’s storage processors, the switch passes input signals from the expansion port directly to the output signal destination of the primary port.
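The routing decision described above can be caricatured in a few lines (an illustrative Python sketch of the switching choice only, not of FC-AL signaling):

```python
# For traffic arriving on the primary port (PRI): switch to the target
# drive if it is in this enclosure, forwarding its output to the
# expansion port (EXP); otherwise pass the signal straight through to EXP.

def route_from_primary(local_drive_ids, target_drive_id):
    """Return ('drive', id) if the target drive is local, else ('EXP', None)."""
    if target_drive_id in local_drive_ids:
        return ("drive", target_drive_id)
    return ("EXP", None)
```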
Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the storage processor, which polls disk enclosure status. LCC firmware also controls the LCC port-bypass circuits and the disk-module status LEDs.
LCCs do not communicate with or control each other.
Captive screws on the LCC lock it into place to ensure proper connection to the midplane. You can add or replace an LCC while the disk enclosure is powered up.
Disk modules
Each disk module consists of one disk drive in a carrier. You can visually distinguish between module types by their different latch and handle mechanisms and by type, capacity, and speed labels on each module. An enclosure can include Fibre Channel or SATA disk modules, but not both types. You can add or remove a disk module while the DAE is powered up, but you should exercise special care when removing modules while they are in use. Drive modules are extremely sensitive electronic components.
Disk drives

The DAE supports Fibre Channel and SATA disks. The Fibre Channel (FC) disks, including enterprise flash (SSD) versions, conform to FC-AL specifications and 4 Gb/s Fibre Channel interface standards, and support dual-port FC-AL interconnects through the two LCCs. SATA disks conform to Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1 in) by 8.75 cm (3.5 in) disk drives.
The disks currently available for the storage system and the usable capacities for disks are listed in the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website. The vault disks must all have the same capacity and same speed. The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not intermix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk, or vice versa.
The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks.
Disk power savings

Some disks support power savings, which lets you assign power saving settings to these disks in a storage system running FLARE version 04.29.000.5.xxx or later, so that these disks transition to the low power state after being idle for at least 30 minutes. For the currently available disks that support power savings, refer to the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.
Drive carrier

The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk module in place to ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.
Power/cooling modules
The power/cooling modules are located above and below the LCCs. The units integrate independent power supply and dual-blower cooling assemblies into a single module.
Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. A FRU (disk, LCC, or power/cooling module) with power-related faults does not adversely affect the operation of any other FRU.
The enclosure cooling system includes two dual-blower modules. If one blower fails, the others speed up to compensate. If two blowers in a system (both in one power/cooling module, or one in each module) fail, the DAE goes offline within two minutes.
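The cooling policy above reduces to a small decision rule (an illustrative Python sketch; the function name and return strings are invented):

```python
# Cooling behavior from the text: four blowers total (two per
# power/cooling module); one failure raises the speed of the rest,
# a second failure takes the DAE offline within two minutes.

def cooling_response(failed_blowers):
    """Summarize the DAE's response to the given number of failed blowers."""
    if failed_blowers == 0:
        return "normal operation"
    if failed_blowers == 1:
        return "remaining blowers speed up"
    return "DAE goes offline within two minutes"
```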
Standby power supplies (SPSs)
Two 2U 2200-watt DC SPSs provide backup power for one SPE and the first (enclosure 0, bus 0) DAE adjacent to it. The two SPSs provide high availability and allow write caching, which prevents data loss during a power failure, to continue. A faulted or not fully charged SPS disables write caching. Each SPS rear panel has one AC inlet power connector with power switch, AC outlets for the SPE and the first DAE (EA 0, bus 0) respectively, and one phone-jack type connector for connection to a management module. Figure 7 shows the location of the SPS connectors.
Figure 7 2200 W SPS connectors. Callouts: status LEDs (online, battery on, replace battery, SPS fault), AC in from PDU, power switch, RJ12 connector to the RS-232 interface on the SPE, AC out to the SPE and DAE EA 0 (bus 0), SPS A, SPS B.
A service provider can replace an SPS while the storage system is powered up.
Powerup and powerdown sequence
The SPE and DAE do not have power switches.
Powering up the storage system
1. Verify the following:
❑ Master switch/circuit breakers for each cabinet/rack power strip are off.
❑ The two power cords for the SPE power supplies are plugged into the SPSs and the power cord retention bails are in place.
❑ Serial connections between the SPE management modules and the SPSs are in place.
❑ Power cords for the first DAE (EA 0, bus 0; often called the DAE-OS) are plugged into the SPSs and the power cord retention bails are in place.
❑ The power cords for the SPSs and any other DAEs are plugged into the cabinet’s power strips.
❑ The power switches on the SPSs are in the on position.
❑ Any other devices in the cabinet are correctly installed and ready for powerup.
2. Turn on the master switch/circuit breakers for each cabinet/rack power strip.
In the 40U-C cabinet, master switches are on the power distribution panels (PDPs), as shown in Figure 8 and Figure 9.
Each AC circuit in the 40U-C cabinet requires a source connection that can support a minimum of 4800 VA of single phase, 200-240 V AC input power. For high availability, the left and right sides of the cabinet must receive power from separate branch feed circuits. Each pair of power distribution panels (PDP) in the 40U-C cabinet can support a maximum of 24 A AC current draw from devices connected to its power distribution units (PDU). Most cabinet configurations draw less than 24 A AC power, and require only two discrete 240 V AC power sources. If the total AC current draw of all the devices in a single cabinet exceeds 24 A, the cabinet requires two additional 240 V power sources to support a second pair of PDPs. Use the published technical specifications and device rating labels to determine the current draw of each device in your cabinet and calculate the total.
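The sizing rule in the note above can be expressed as a small calculation (an illustrative Python sketch; the example current draws are invented):

```python
# Each pair of PDPs supports a maximum of 24 A AC. If the cabinet's
# total draw exceeds 24 A, a second pair of PDPs (two additional
# 240 V sources) is required.

PDP_PAIR_LIMIT_AMPS = 24

def pdp_pairs_required(device_draws_amps):
    """1 PDP pair if the total draw fits in 24 A, otherwise 2 pairs."""
    return 1 if sum(device_draws_amps) <= PDP_PAIR_LIMIT_AMPS else 2
```

For example, a cabinet whose devices total 30 A would need the second pair of PDPs.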
Figure 8 PDP master switches and power sources in the 40U-C cabinet with two PDPs used. Callouts: master switches, SPS switches, power source A, power source B, DAE-OS, SPE.
Figure 9 PDP master switches and power sources in the 40U-C cabinet with four PDPs. Callouts: master switches, SPS switches, power sources A, B, C, and D, DAE-OS, SPE.
The storage system can take 8 to 10 minutes to complete a typical powerup. Amber warning LEDs flash during the power-on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging. The powerup is complete when the CPU power light on each SP is steady green.
The CPU status lights are visible from the rear of the SPE (Figure 10).

Figure 10 Location of CPU lights
If amber LEDs on the front or back of the storage system remain on for more than 10 minutes, make sure the storage system is correctly cabled, and then refer to the troubleshooting flowcharts for your storage system on the CLARiiON Tools page on the EMC Powerlink website (http://Powerlink.EMC.com). If you cannot determine any reasons for the fault, contact your authorized service provider.
Powering down the storage system
1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the AIX, HP-UX, Linux, or Solaris operating system, back up critical data and then unmount the file systems.
Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.
2. After five minutes, use the power switch on each SPS to turn off power. The SPE and primary DAE power down within two minutes.
CAUTION
Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:
Enclosure 0 Disk 5 0x90a (Can’t Assign - Cache Dirty)0 0xafb40 0x14362c
Contact your service provider if this situation occurs.
This turns off power to the SPE and the first DAE (EA 0, bus 0). You do not need to turn off power to the other connected DAEs.
Status lights (LEDs) and indicators
Status lights made up of light emitting diodes (LEDs) on the SPE, its FRUs, the SPSs, and the DAEs and their FRUs indicate the components’ current status.
Storage processor enclosure (SPE) LEDs
This section describes status LEDs visible from the front and the rear of the SPE.
SPE front status LEDs
Figure 11 and Figure 12 show the location of the SPE status LEDs that are visible from the front of the enclosure. Table 6 and Table 7 describe these LEDs.
Figure 11 SPE front status LEDs (bezel in place)
Figure 12 SPE front status LEDs (bezel removed). Callouts: power supply A, cooling modules, power supply B.
Table 6 Meaning of the SPE front status LEDs (bezel in place)

LED            Quantity   State         Meaning
Power          1          Off           SPE is powered down.
                          Solid blue    SPE is powered up.
System fault   1          Off           SPE is operating normally.
                          Solid amber   A fault condition exists in the SPE. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.
Table 7 Meaning of the SPE front status LEDs (bezel removed)

LED                    Quantity       State            Meaning
Power supply power     1 per supply   Off              Power supply is not powered up.
                                      Solid green      Power supply is powered up and operating normally.
Power supply fault     1 per supply   Solid amber      Power supply is faulted.
                                      Blinking amber   Fault condition exists external to power supply, such as SP removed, no AC power input, or ambient over-temperature condition.
Cooling module fault   1 per blower   Off              Blower is not powered up.
                                      Solid green      Blower is powered up and operating normally.
                                      Solid amber      Blower is faulted.
SPE rear status LEDs
Figure 13 shows the status LEDs that are visible from the rear of the SPE. Table 8 describes these LEDs.
Figure 13 SPE rear status LEDs
Table 8 Meaning of the SPE rear status LEDs

CPU power (1 per CPU):
  Off             CPU is not powered.
  Solid green     CPU is powered and operating normally.

CPU fault (1 per CPU):
  Blinking amber  Running powerup tests.
  Solid amber     CPU is faulted.
  Blinking blue   OS loaded.
  Solid blue      CPU degraded.

Unsafe to remove (1 per CPU):
  Solid white     DO NOT REMOVE the module while this light is on.

I/O module status (1 per module; see note 1):
  Off             Power is not being supplied to the module.
  Solid green     Power is being supplied to the module.
  Amber           Module is faulted.

BE port link (1 per Fibre Channel back-end port; see note 2):
  Off             No link: the cable is disconnected, faulted, or not a supported type.
  Solid green     1 Gb/s or 2 Gb/s link speed.
  Solid blue      4 Gb/s link speed.
  Blinking green then blue   Cable fault.

FE port link (1 per Fibre Channel front-end port; see note 2):
  Off             No link: the host is down, the cable is disconnected, an SFP is not in the port slot, or the SFP is faulted or not a supported type.
  Solid green     1 Gb/s or 2 Gb/s link speed.
  Solid blue      4 Gb/s link speed.
  Blinking green then blue   SFP or cable fault.

Note 1: LED is on the module handle.
Note 2: LED is next to the port connector.
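As a quick reference for scripted troubleshooting notes, the port-link LED states in Table 8 can be captured in a small lookup table. This is only an illustrative sketch; the helper name and strings are hypothetical and not part of any EMC tool.

```python
# Hypothetical lookup for the Fibre Channel port-link LED states
# documented in Table 8 (illustrative only, not an EMC utility).
PORT_LINK_STATES = {
    "off": "no link (cable disconnected, faulted, or unsupported; "
           "for FE ports also: host down or SFP missing/faulted)",
    "solid green": "link up at 1 Gb/s or 2 Gb/s",
    "solid blue": "link up at 4 Gb/s",
    "blinking green then blue": "cable fault (on FE ports: SFP or cable fault)",
}

def describe_port_link(state: str) -> str:
    """Return the documented meaning of a port-link LED state."""
    return PORT_LINK_STATES.get(state.lower(), "unknown state")

if __name__ == "__main__":
    print(describe_port_link("Solid Blue"))
```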
DAE status LEDs
This section describes the following status LEDs and indicators:
Front DAE and disk modules status LEDs
Enclosure address and bus ID indicators
LCC and power/cooling module status LEDs
Front DAE and disk modules status LEDs
Figure 14 shows the location of the DAE and disk module status LEDs that are visible from the front of the enclosure. Table 9 describes these LEDs.
Figure 14 Front DAE and disk module status LEDs (bezel removed). Labels: enclosure power LED (green or blue), enclosure fault LED (amber), disk fault LED (amber), disk activity LED (green).
Table 9 Meaning of the front DAE and disk module status LEDs

DAE power (1 LED):
  Off           DAE is not powered up.
  Solid green   DAE is powered up and the back-end bus is running at 2 Gb/s.
  Solid blue    DAE is powered up and the back-end bus is running at 4 Gb/s.

DAE fault (1 LED):
  Solid amber   On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.

Disk activity (1 per disk module):
  Off                               Slot is empty, contains a filler module, or the disk is powered down by command (for example, as the result of a temperature fault).
  Solid green                       Drive has power but is not handling any I/O activity (the ready state).
  Blinking green, mostly on         Drive is spinning and handling I/O activity.
  Blinking green at a constant rate Drive is spinning up or spinning down normally.
  Blinking green, mostly off        Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.

Disk fault (1 per disk module):
  Solid amber   On when the disk module is faulty, or as an indication to remove the drive.
Enclosure address and bus ID indicators
Figure 15 shows the location of the enclosure address and bus ID indicators that are visible from the rear of the enclosure. In this example, the DAE is enclosure 2 on bus (loop) 1; note that the indicators for LCC A and LCC B always match. Table 10 describes these indicators.
Figure 15 Location of the enclosure address and bus ID indicators. Labels on each LCC: bus ID, enclosure address (0-7), and EA selection.
Table 10 Meaning of the enclosure address and bus ID indicators

Enclosure address (8 indicators, green):
  The displayed number indicates the enclosure address.

Bus ID (8 indicators, blue):
  The displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling: LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.
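The cabling rules the bus ID indicators enforce can be sketched as a simple consistency check. This is a hypothetical helper for illustration only; it assumes the 0-7 range shown on the indicator panel and is not part of any EMC software.

```python
# Illustrative sketch of the DAE cabling consistency the bus ID
# indicators report (hypothetical helper, not an EMC utility).
def validate_dae(lcc_a_bus: int, lcc_b_bus: int, enclosure_address: int) -> list:
    """Return a list of cabling problems; an empty list means consistent IDs."""
    problems = []
    if lcc_a_bus != lcc_b_bus:
        # Mirrors the blinking bus ID condition: the two LCCs must
        # always be connected to the same bus (loop).
        problems.append("LCC A and LCC B are not connected to the same bus")
    if not 0 <= enclosure_address <= 7:
        # The indicator panel only displays enclosure addresses 0-7.
        problems.append("enclosure address outside the displayed 0-7 range")
    return problems
```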
DAE power/cooling module status LEDs
Figure 16 shows the location of the status LEDs for the power supply/system cooling modules (referred to as power/cooling modules). Table 11 describes these LEDs.
Figure 16 DAE power/cooling module status LEDs. Labels on each module: power LED (green), power fault LED (amber), blower fault LED (amber).
Table 11 Meaning of the DAE power/cooling module status LEDs

Power supply active (1 per supply, green):
  On when the power supply is operating.

Power supply fault (1 per supply, amber; see note):
  On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple-blower or ambient over-temperature condition has shut off power to the system.

Blower fault (1 per cooling module, amber; see note):
  On when a single blower in the power supply is faulty.

Note: The DAE continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple-blower fault condition and will power down the enclosure unless you replace a blower within two minutes.
DAE LCC status LEDs
Figure 17 shows the location of the status LEDs for a link control card (LCC). Table 12 describes these LEDs.
Figure 17 DAE LCC status LEDs. Labels on each LCC: power LED (green), fault LED (amber), primary link active LED (2 Gb/s green, 4 Gb/s blue), expansion link active LED (2 Gb/s green, 4 Gb/s blue).
Table 12 Meaning of the DAE LCC status LEDs

LCC power (1 per LCC, green):
  On when the LCC is powered up.

LCC fault (1 per LCC, amber):
  On when either the LCC or a Fibre Channel connection is faulty. Also on during power-on self-test (POST).

Primary link active (1 per LCC):
  Green when a 2 Gb/s primary connection is active; blue when a 4 Gb/s primary connection is active.

Expansion link active (1 per LCC):
  Green when a 2 Gb/s expansion connection is active; blue when a 4 Gb/s expansion connection is active.
SPS status LEDs
Figure 18 shows the location of the SPS status LEDs that are visible from the rear. Table 13 describes these LEDs.
Figure 18 2200 W SPS status LEDs. Labels: online, battery on, replace battery, and SPS fault LEDs; power switch; AC in from PDU; AC out to SPE and DAE EA 0, bus 0; RJ12 to RS-232 interface on the SPE; SPS A and SPS B.
Table 13 Meaning of the 2200 W SPS status LEDs

Online (1 per SPS, green):
  When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by AC line input.

Battery on (1 per SPS, amber):
  The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the storage system writes all cached data to disk, and the event log records the event.

Replace battery (1 per SPS, amber):
  The SPS battery is not fully charged and may not be able to serve its cache-flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first.

SPS fault (1 per SPS, amber):
  The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur.
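The common thread in Table 13 is that write caching continues only while at least one SPS is online and healthy. That rule can be summarized in a minimal sketch; the function name and state strings are hypothetical and do not correspond to any EMC interface.

```python
# Minimal sketch of the write-cache rule implied by Table 13:
# caching requires at least one healthy, online SPS (illustrative only).
def write_cache_allowed(sps_states: list) -> bool:
    """True if any SPS reports the 'online' state (steady or recharging)."""
    # "battery on", "replace battery", and "sps fault" states cannot
    # guarantee a cache flush, so they do not count toward protection.
    return any(state == "online" for state in sps_states)
```

For example, a system with one online SPS and one faulted SPS keeps write caching enabled, while a system running on batteries alone does not.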
Copyright © 2008, 2009 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks mentioned herein are the property of their respective owners.