UCS for Dummies v0 05


    Table of Contents

1 Document Information

1.1 Key Contacts

1.2 Document Control

2 Document Purpose

3 Why should I care about Cisco UCS?

4 Cisco UCS Components

5 Topology Diagram

6 The 5 Karate Moves of Cisco UCS

7 Vblock

8 What to look out for that could indicate a potential UCS opportunity

9 Pre-empting Customer Concerns

10 Cisco UCS vs. The Competition

10.1 Cisco UCS vs. HP BladeSystem Matrix

10.2 Cisco UCS vs. IBM BladeCenter

11 Glossary of Terms

Appendix A: B Series UCS Blade Servers

Appendix B: C Series UCS Servers


    1 Document Information

    1.1 Key Contacts

    Name Role Contact Details

    Colin Lynch Senior Technical Consultant [email protected]

    1.2 Document Control

    Contents Details

Document Name The CC Guide to UCS (formerly the Dummies Guide / AKA the little book of UCS)

    Client Strictly Computacenter Internal use only

Version Number V0.1 Initial Draft

    V0.2 Quality Reviewed by Peter Green

    V0.3 UCS vs The Competition and Vblock chapters added

    V0.4 Peer Reviewed by David Roberts and Darren Franklin

    V0.5 Updated to include UCSM 1.4 (Balboa)

    Document Version Date 7th February 2011

    Author Colin Lynch

    Classification Draft Internal

    Document References

    Quality Review By Peter Green (v0.2)

    Peer Review By

    Template Tempo - Document Template.dotm Version 2.32

    Notice

This document and the information it contains are confidential and remain the property of Computacenter (UK) Ltd. The document may not be reproduced or the contents transmitted to any third party without the express consent of Computacenter (UK) Ltd.

    In the absence of any specific provision, this document has consultative status only. It does not constitute a contract between Computacenter and any other party. Furthermore, Computacenter does not accept liability for the contents of the document, although it has used reasonable endeavours to ensure accuracy and correct understanding.

    Unless expressly forbidden, Computacenter may transmit this document via email or other unencrypted electronic means.


    2 Document Purpose

    This document has purposely been written in an informal manner to keep within the spirit of the title.

The purpose of this document is to provide a simplified overview of the Cisco Unified Computing System (UCS). If you would like a greater level of detail on any aspect of Cisco UCS, or have any questions, please contact the author: [email protected]

The intended audience should have a basic knowledge of data communications, but no in-depth technical knowledge in any particular area is required.

Hopefully there is something in this document for everyone, including:

Techies who wish to get an overview of what Cisco UCS is.

Sales Specialists who require an overview of the technology.

Salespeople who need to know how and where best to position Cisco UCS (just skip the techy bits).

By the end of this document you should be able to talk sensibly about Cisco UCS and understand ALL the key concepts and differentiators* that make Cisco UCS such a valuable proposition for our customers. All these key points have been condensed into the 5 Karate Moves of UCS.

    The UCS family can be divided into two groups:

1. The C Series, which is a rack-mount form factor (see Appendix B)

2. The B Series, which is a blade architecture (see Appendix A)

    This document will primarily reference the B Series although most concepts discussed are the same for both.

    * Hopefully this is the last time that a fifteen letter word appears in this document.


    3 Why should I care about Cisco UCS?

    Good question!

There are several reasons why you should care about Cisco UCS:

    Cisco UCS represents the next generation of Data centre technology

    There are significant OPEX savings to be had by deploying Cisco UCS

    Computacenter have partnered with Cisco to make Cisco UCS a very competitive and compelling offering to our customers.

The details of this deal can be seen in the video presentation by Simon Walsh and Cisco on Browzaplus.

    http://www.browzaplus.co.uk/index.html

In addition to the above, our customers would and should expect us to care about UCS. Put simply, Computacenter can't afford not to care about Cisco UCS.

As pointed out in the Lord of the Rings films, "the world is changing", and this is especially true in the data centre; those who don't adapt to these changes will be analogue people in a digital age.

There is scarcely an IT article published these days that does not evangelise about cloud computing, greener data centres, and higher consolidation and virtualisation in the data centre, all while driving up data centre efficiency, driving down costs and increasing availability.

    Cisco UCS was developed from the ground up to address all of the above concerns and requirements.

    Put simply the Cisco Unified Computing System is a next-generation data centre platform that unites compute, network, storage access, and virtualization into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility by unifying all components at the natural point of aggregation which is the network.

Cisco UCS is certainly not a "me too" offering from Cisco with regards to getting involved in the data centre systems business, but an opportunity brought on by the inflection point in the market, driven by 10Gb Ethernet. As we will cover during the course of this book, Cisco UCS changes the rules which have thus far governed, and to a degree limited, the data centre.

The Unified Computing System answers a simple question: what would you do if you could build a system with no preconceptions?

    That same question has been asked many times over the years by Cisco. The results have given us data centre legends such as the Catalyst 6500 line of switches, the Cisco MDS storage line, as well as the current Nexus 7000/5000/2000/1000V family of switches.


    4 Cisco UCS Components

First off, a quick rundown of the bits that make up the Cisco Unified Computing System:

1-2 Fabric Interconnects

1-40 chassis with 1-2 I/O modules each

1-8 blades per chassis.

    The relevant components are all numbered in figure 2 and explained in more detail below.


    1) UCS 5108 Chassis

This is what some people incorrectly refer to as "a UCS".

    UCS is the combination of all components which is why Cisco calls it the Unified Computing System.

The component labelled number 1 in the diagram and shown below is the UCS 5108 Blade chassis, which houses 1-8 UCS blade servers and up to 2 I/O modules. The chassis is 6U in height and can hold 8 half width blades, or 4 full width blades, or a combination of both. The chassis can also house up to 4 power supplies and 8 fan modules. The system will operate on two power supplies (2+2 redundancy), but 4 power supplies should always be installed to provide redundancy against a power feed (grid) failure.

    Cisco UCS 5108 Blade Chassis populated with 8 x half width blades.

    UCS server blades or Compute nodes have 2 or 4 Intel CPU sockets, internal SAS drive bays, slots for 1 or 2 mezzanine adapters, and DDR3 DIMM slots.

    Half width blades have 2 CPU sockets, 12 DIMM slots and take one mezzanine adapter.

    Full width blades have 2 or 4 CPU sockets, up to 48 DIMM slots allowing for up to 384GB of RAM per dual socket blade and 512GB of RAM in a quad socket blade. The full width blades can take two mezzanine adapters.
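As an aside, the "combination of both" rule above reduces to a simple slot constraint, since a full width blade occupies two half width slots. A minimal sketch in Python (purely illustrative, not part of any Cisco tooling):

HALF_WIDTH_SLOTS = 8  # a UCS 5108 chassis has 8 half width blade slots

def fits_in_chassis(half_width: int, full_width: int) -> bool:
    """True if the given blade mix fits in a single UCS 5108 chassis
    (a full width blade occupies two half width slots)."""
    return half_width + 2 * full_width <= HALF_WIDTH_SLOTS

print(fits_in_chassis(8, 0))  # True:  8 half width blades
print(fits_in_chassis(0, 4))  # True:  4 full width blades
print(fits_in_chassis(4, 2))  # True:  a mixed configuration
print(fits_in_chassis(6, 2))  # False: over capacity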


    Converged Network Adapters (CNAs): These are small daughter boards that are installed in each blade and provide the I/O connectivity. There are currently three main types of CNA (also referred to as mezzanine adapter cards) available.

M81KR, also referred to as the VIC (Virtual Interface Card) or by the Cisco codename Palo. The VIC is a dual-port 10 Gigabit Ethernet mezzanine card that provides up to 128* virtual adapters, which can be configured in any combination of Ethernet network interface cards (NICs) or Fibre Channel Host Bus Adapters (HBAs). The M81KR should be the mezzanine of choice as it is the most flexible of all the options.

    Cisco M81KR VIC Mezzanine Card

M71KR, also referred to as the Compatibility card or by the Cisco codename Menlo. This option should be selected if for any reason the Cisco chipset on the M81KR is not supported by a 3rd party vendor. The M71KR has 2 x 10 Gigabit Ethernet connections to the mid-plane and presents 2 x Intel 10GE NICs and either 2 Emulex or 2 QLogic 4Gb HBAs to the blade.

    Cisco M71KR Menlo Mezzanine Card

* While the M81KR VIC can support 128 virtual adapters, the real usable limit is 58, due to a limit on the number of virtual interfaces (VN-Tags) that are supported. There are 15 virtual interfaces supported per uplink from the chassis to the Fabric Interconnect, so using 4 x uplinks gives 60 virtual interfaces, less 2 for management, which gives 58. Don't get too concerned about the numbers though; when was the last time you saw a server with a mix of 58 NICs and HBAs?
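For those who like to see the arithmetic, here is the same calculation as a minimal Python sketch (purely illustrative; the figures are the ones quoted in the footnote above):

VIFS_PER_UPLINK = 15  # virtual interfaces (VN-Tags) per chassis-to-FI uplink
MGMT_OVERHEAD = 2     # virtual interfaces reserved for management

def usable_virtual_adapters(uplinks: int) -> int:
    """Usable vNICs/vHBAs on an M81KR (Palo) blade for a given uplink count."""
    return uplinks * VIFS_PER_UPLINK - MGMT_OVERHEAD

for uplinks in (1, 2, 4):
    print(uplinks, "uplink(s):", usable_virtual_adapters(uplinks), "usable virtual adapters")
# 4 uplinks: 4 * 15 - 2 = 58, the "real usable limit" quoted above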


82598KR, also referred to by the Cisco codename Oplin. The Oplin provides 2 x Intel 10 Gigabit Ethernet NICs from the blade to the mid-plane but has no Fibre Channel HBA functionality. The Oplin adapter is based on the Intel 82598 10 Gigabit Ethernet controller, which is designed for efficient high-performance Ethernet transport. This is obviously for blades that have no fibre channel connectivity requirements. The benefit of this card is that it is a cheaper option than the previous two, but it is limited to Ethernet-only connectivity.

Cisco 82598KR Oplin Mezzanine Card

    2) I/O Modules also referred to as 2104XP Fabric Extenders (FEX)

The I/O modules fit into the rear of the chassis as shown below. Whilst the UCS can work with one I/O module, if redundancy is required (which it always should be) and two active data paths are to be provided, then you will need two.

    These I/O modules connect internally to each blade and each provides 4 x 10 Gigabit Ethernet ports for external uplinks to the Fabric Interconnects. Uplinks of 1, 2 and 4 cables are supported.

    The figure above shows the two I/O modules


    3) Cisco UCS 6100 series Fabric Interconnects.

The fabric interconnects are the brains of the UCS; they also act as the central hub for the UCS, allowing up to 40 chassis (up to 320 servers) to be connected together into a single management domain. The current supported maximum with UCSM version 1.4(1) is 20 chassis (160 servers); this will, however, increase with further updates.

    The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

    The fabric interconnects also contain the UCS manager (UCSM) software which is the singular web portal by which the entire system is managed.

The Fabric Interconnects are available in two options: a 20-port 1U model (6120) and a 40-port 2U model (6140). They also have expansion module options for additional 10 Gigabit Ethernet or Fibre Channel (FC) ports. As can be seen from the topology diagram, the fabric interconnects also provide the connectivity into the existing LAN and SAN. Again, for redundancy, two fabric interconnects should be included in any UCS design. It is important to know that while the management functionality of the fabric interconnects runs in Active/Standby mode, the data paths of both fabric interconnects run Active/Active.

The fabric interconnects are capable of running in either end host mode, where the fabric interconnects appear to upstream switches as a host with multiple NICs and HBAs, or in switch mode, where the fabric interconnects act more like a layer 2 switch. End host mode is the default, and Cisco strongly recommends running the fabric interconnects in this mode.

The 6120 and 6140 Fabric Interconnects come pre-licensed with 8 and 16 FCoE ports respectively; additional port licences are purchased as required.

    The above graphic shows the 6120 (top) and the 6140 (bottom)


The other components shown in the numbered diagram (Figure 2) are not part of the Cisco UCS but are nevertheless explained below.

    4) Fibre channel switches.

Fibre channel switches are also referred to as fabric switches or fabrics.

    These are the fibre channel switches that connect servers to their fibre channel storage. SAN design best practice specifies there should always be two separate unconnected SANs generally referred to as SAN A and SAN B. These are two completely separate infrastructures providing redundant paths from the servers to the Storage.

    In Cisco terms these would most likely be Cisco MDS switches (Multilayer Director Switch) but could be Cisco Nexus. Other common vendors of SAN switches include Brocade and Hewlett Packard.

    Note

Since UCSM 1.4, with the introduction of appliance ports, it is now possible to attach NAS and SAN storage devices directly to the Cisco 6100 fabric interconnects. However, full SAN functionality and features like zoning are not available on the fabric interconnects. If you think you have a requirement for direct storage connection to the Cisco Fabric Interconnect then please contact the author to validate your requirement.

    5) Ethernet switches.

This switch represents the Ethernet network. The 6100 fabric interconnects would generally uplink into a redundant core switch, or dual uplink into a pair of core switches, the Cisco Nexus 7000 for example. These uplinks will be 10 Gigabit Ethernet, although it is possible to throttle down a number of ports on the fabric interconnects to Gigabit Ethernet to provide support for core switches which do not have 10 Gigabit connectivity.


    6) Storage Area Network (SAN)

    This is the storage device(s) which contain the physical disks and Fibre Channel interfaces.

    Common fibre channel storage offerings are:

    EMC Symmetrix DMX/VMAX

    IBM XIV

    Hitachi Data Systems (HDS) VSP

In contrast to fibre channel storage, a common alternative is Network Attached Storage (NAS), which provides network shares just like a file server. NAS uses Ethernet cabling and as such would be connected to the Ethernet data network.

    Common NAS offerings are:

    NetApp FAS product line

    EMC Celerra

Well, that about covers the components that make up the Cisco Unified Computing System and how Cisco UCS integrates into existing Ethernet and fibre channel networks.

The topology diagram below shows how all these components tie in with each other.


    5 Topology Diagram

The below diagram shows all the components of the Cisco UCS system, along with how UCS ties in with the existing Ethernet Local Area Network (LAN) and fibre Storage Area Network (SAN). Don't worry if the diagram looks a bit daunting; all shall be explained!

Figure 2: Cisco UCS Components


    6 The 5 Karate Moves of Cisco UCS

    In the same way that the Karate Kid only knew 5 Karate moves and was then immediately able to beat black belts and win a whole tournament, once you know the 5 Moves below you will know all the key UCS features and how to impart the value of the Cisco UCS proposition.

    In summary the 5 Karate Moves of UCS are:

    1. Management

    2. Fibre Channel over Ethernet

    3. Extended Memory Technology

    4. Virtualisation

    5. Statelessness

    These 5 moves are described in detail below.

    1) Management

Cisco UCS uses a single IP address to manage up to 320 (160 today) servers, including all of the network and storage elements of those servers. This also means a single management pane of glass via a single GUI (UCSM).

2) Unified Fabric (Fibre Channel over Ethernet (FCoE))

This basically just means wrapping a Fibre Channel (storage) frame in an Ethernet (networking) frame and transmitting it over the same cabling. This has many advantages, which are detailed below.

Historically, Fibre Channel speeds have always been higher than those of Ethernet, i.e. 1, 2, 4 and 8Gb/s. However, 10Gb Ethernet is now available and the 40Gb and 100Gb Ethernet standards are now ratified, so speed is no longer the limiting factor.

Next, making use of a lossy Ethernet network to transmit lossless storage data was a challenge that needed to be overcome. Think of an Ethernet network as a conveyor belt with a workman loading boxes on one end. He is unaware whether he is loading boxes too fast for the second workman at the other end to take them off; however, if boxes start dropping off the end of the conveyor belt, the second workman can shout for him to slow down. The first workman will then halve his loading rate and gradually speed up until boxes once more fall off the end and his colleague shouts to him to


slow down again. These dropped boxes, while tolerable in an Ethernet network, are totally unacceptable in a fibre channel network.

In contrast, think of a Fibre Channel network not as a conveyor belt but as a line of workmen who pass the boxes down the line from one to the other. The first workman cannot pass his box until the workman standing next to him has passed his own box on and thus has empty hands (or, in Fibre Channel terms, a buffer credit). This way there should never be a dropped box.

So the challenge was how to transmit both types of boxes over the same network. As we have seen many times before, when Cisco don't like the rules, what do they do? That's right, they change them. Cisco accomplished this by enhancing the rules of Ethernet with the inception of Cisco Data Centre Ethernet (DCE), submitted to the IEEE for standardisation, who thought the name Data Centre Bridging (DCB) sounded better. Other vendors have since followed suit and use terms such as Converged Enhanced Ethernet, but all should adhere to the IEEE DCB standard.

By unifying Ethernet and Fibre Channel on the same medium, a whole new raft of consolidation and cost savings opens up. Consider the below example.

Example: Take a VMware ESX host that requires 8 NICs and 2 HBAs, making 10 physical connections in total. This same server when deployed on a Cisco UCS blade now has only 2! This equates to an 80% cabling reduction. On top of which, adding additional NICs and HBAs is now a simple matter of mouse clicks, leading to what has been termed a "wire once" deployment.

This significant cabling reduction leads to several other benefits, including: fewer server switch ports required, which means fewer switches, more efficient cooling, and less power used (Gigabit Ethernet Cat 6 cabling consumes up to 8 watts of power per end; FCoE twinax cabling uses 0.1 watts per end).

It is also a fact that many enterprises are forced into buying high-end multiprocessor servers purely because they need the higher number of PCIe slots that the larger servers provide, when the processor and memory of a smaller 1U server may well have been sufficient. This is just another one of the many current data centre inefficiencies that are negated with Cisco UCS.
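To put numbers on the example, here is a minimal arithmetic sketch (Python, purely illustrative; the connection counts and per-end power figures are those quoted above):

legacy_connections = 8 + 2  # 8 NICs + 2 HBAs per traditional ESX host
ucs_connections = 2         # one converged 10GbE link per fabric on a UCS blade

reduction = 1 - ucs_connections / legacy_connections
print(f"Cabling reduction per host: {reduction:.0%}")  # 80%

# Power per cable end, from the text: Cat 6 up to 8 W, FCoE twinax 0.1 W
legacy_power_w = legacy_connections * 2 * 8.0  # both ends of every Cat 6 cable
ucs_power_w = ucs_connections * 2 * 0.1        # both ends of every twinax cable
print(f"Cable power per host: {legacy_power_w:.0f} W vs {ucs_power_w:.1f} W")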


    3) Extended Memory Technology (EMT)

This basically means more virtual machine workloads per server and greater server consolidation: in other words, much more with much less.

It is widely accepted that a CPU's optimal running efficiency is 60-70%, but with the rapid evolution of CPUs and the ever increasing number of cores and multi-threading capabilities per socket, most hosts, particularly VMware ESX hosts, run out of RAM well before they reach this 60-70% sweet spot.

In an Intel Nehalem (Xeon 5500/5600) architecture, memory is directly associated with each processor (socket). Each socket has 3 memory channels, and each memory channel has access to 2 DDR3 DIMM slots, which equals 6 DIMM slots per socket. Therefore a dual socket server can access a maximum of 12 DIMM slots, and if using 16GB DIMMs the absolute maximum amount of RAM that can be installed is 192GB.

So how do you get more RAM? Well, you need to add another CPU, which in fact makes the system even less efficient than before, as you have added another huge amount of CPU cycles, not to mention possibly significantly increasing licence costs. VMware vSphere, for example, is licensed per socket, so getting quad socket RAM on a dual socket board can halve VMware vSphere licence costs, which in real terms can in itself provide a significant cost saving.

Enter EMT, which allows a dual socket host access to a massive 384GB of RAM. How does Cisco manage this? Well, as mentioned in chapter 3, Cisco UCS is a ground-up development, and by partnering with the likes of Intel and VMware, Cisco could address a lot of these limitations and provide several optimisations. And as mentioned in Move 2, when the playing field is too even and Cisco doesn't like the rules, they change them.

Cisco realised that the maximum single DIMM that a BIOS could logically address is 32GB, and while at the time of writing (Q4 2010) 32GB DIMMs are still not readily commercially available, by developing the Catalina ASIC and placing it between the CPU and memory channels it was possible in effect to RAID 4 x 8GB physical DIMMs into 1 x 32GB logical DIMM. This makes it possible to present 6 x 32GB logical DIMMs (192GB) to each socket, which physically equates to 24 x 8GB DIMMs per socket on the system board, making 48 DIMM slots on a dual socket blade.
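The memory arithmetic above, worked through as a short Python sketch (purely illustrative; all figures come from the paragraph above):

channels_per_socket = 3
dimms_per_channel = 2
slots_per_socket = channels_per_socket * dimms_per_channel  # 6 DIMM slots per socket

# Standard Nehalem dual socket server with 16GB DIMMs:
standard_max_gb = 2 * slots_per_socket * 16
print(f"Standard dual socket maximum: {standard_max_gb} GB")  # 192 GB

# With the Catalina ASIC, 4 x 8GB physical DIMMs appear as 1 x 32GB logical DIMM,
# giving 24 physical slots per socket (48 slots on a dual socket B250):
emt_slots_per_socket = slots_per_socket * 4  # 24
emt_max_gb = 2 * emt_slots_per_socket * 8
print(f"EMT dual socket maximum:      {emt_max_gb} GB")  # 384 GB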


The above figure shows the Cisco Catalina ASICs between the DIMMs and the CPU, which present 24 x 8GB physical DIMMs as 6 x 32GB logical DIMMs per CPU (socket).

While the benefits detailed above are clear with regards to maximising memory, there is another benefit to be had if there isn't a requirement to maximise the memory to 384GB per blade. For example, take the requirement to equal the maximum amount of memory that can be installed in an HP blade server utilising dual Intel Xeon 5500 processors. Assuming £1,000 for a 16GB DDR3 DIMM and £100 for a 4GB DIMM:

HP dual Xeon 5500: 12 DIMM slots using 16GB DIMMs = 192GB @ 800MHz = £12,000

Cisco UCS B250: 48 DIMM slots using 4GB DIMMs = 192GB @ 1066MHz = £4,800

As can be seen, there are significant cost savings to be had by using a large number of low capacity DIMMs. This is only made possible by having the memory real estate of 48 DIMM slots available with Cisco Extended Memory Technology, compared to only 12 in a comparable HP server.
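The same comparison as a two-line calculation (Python; the per-DIMM prices are the assumptions stated above, not vendor quotes):

hp_cost = 12 * 1000  # 12 slots x 16GB DIMMs = 192GB at £1,000 per DIMM
ucs_cost = 48 * 100  # 48 slots x 4GB DIMMs  = 192GB at £100 per DIMM
print(f"HP dual Xeon 5500: 192GB for £{hp_cost:,}")   # £12,000
print(f"Cisco UCS B250:    192GB for £{ucs_cost:,}")  # £4,800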


    4) Virtualisation

Cisco UCS was built from the ground up with virtualised environments in mind. Cisco have partnered with Intel and VMware to integrate many virtualisation optimisations at the hardware level. For instance, Cisco UCS virtualises I/O at the BIOS level, so when guest operating systems perform PCI bus scans they see these virtualised resources as physical devices. Cisco UCS also tightly integrates with VMware, and individual virtual machines can be seen via the UCSM GUI.

5) Statelessness

Statelessness basically means making the underlying hardware completely transparent to the operating system and applications running over it. Blades or "compute nodes" have no identity. This is accomplished via service profiles, which are like software definitions for servers. By using service profiles, replacing a failed blade can take a matter of minutes; all MAC addresses, World Wide Node Names (WWNs), Universally Unique Identifiers (UUIDs), firmware and even BIOS settings are only ever associated with a service profile, which can be detached and reattached to blades as required.

Example: Historically, if a server failed in the data centre the procedure would be to send an engineer in to investigate and, if required, replace the failed blade. Obviously we don't want to have to reconfigure other entities which may be linked to this server's particular MAC address, so the engineer would move the NIC cards from the failed unit to the replacement. Similarly, we don't want to have to involve the SAN team in re-zoning the storage, so the engineer would move the HBAs from the failed unit to the replacement to ensure the WWPNs remain unchanged. Also, there may be software licenses tied to the Universally Unique Identifier (UUID) of the server. Taking all the above into account, this server swap-out could take several hours, resulting in server downtime and engineer resource costs.

In a Cisco UCS environment this would be a simple matter of disassociating the service profile from the failed blade, re-associating it to a standby or replacement blade, and then powering up the new blade. As mentioned, all MAC addresses, WWPNs, UUIDs, firmware revisions and settings are only ever associated with the service profile, and as such this new blade will be an exact match of the failed unit, thus preserving all identity information.
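To make the idea concrete, here is a hypothetical sketch in plain Python (a conceptual model only, not the UCSM API; all names and values are invented for illustration) of what a service profile holds and how re-associating it moves a server's identity to new hardware:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    """Conceptual model of a UCS service profile: the identity lives here,
    never on the physical blade."""
    name: str
    macs: list[str]      # NIC MAC addresses
    wwpns: list[str]     # Fibre Channel World Wide Port Names
    uuid: str            # server UUID (software licences may be tied to this)
    firmware: str        # firmware/BIOS policy
    blade: Optional[str] = None  # currently associated physical blade, if any

    def associate(self, blade_slot: str) -> None:
        self.blade = blade_slot  # the blade in this slot now boots with this identity

profile = ServiceProfile(
    name="esx-host-01",
    macs=["00:25:b5:00:00:1a"],
    wwpns=["20:00:00:25:b5:00:00:1a"],
    uuid="c0ffee00-0000-4000-8000-000000000001",
    firmware="1.4(1)",
)
profile.associate("chassis-1/blade-3")
# Blade 3 fails: simply re-associate; MACs, WWPNs and UUID travel with the profile.
profile.associate("chassis-1/blade-7")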


    7 Vblock

    The Virtual Computing Environment (VCE) coalition, formed jointly by Cisco and EMC with VMware, represents an unprecedented level of collaboration in development, services, and partner enablement to minimize risk during an organisation's infrastructure virtualisation journey to private cloud implementation. One of the offerings to come out of this coalition is the Vblock.

    Vblock delivers a pre-tested, pre-integrated, unified solution for delivering virtualized IT resources which are managed and supported through a single organisation, staffed by experts from all three companies.

There seems to be a lot of confusion in the industry and among our customers around what a Vblock is, in particular how UCS relates to Vblock.

Put simply, a Vblock is a packaged offering from VCE, with Cisco UCS providing the compute platform, EMC providing the storage and VMware providing the virtualisation.

    While UCS is part of the Vblock Bill-of-Materials (BOM), it is not in any way restricted to only be sold in a VCE or Vblock configuration. While UCS does have design characteristics and integration points that optimise it for virtualised Data Centre environments, Cisco realises that customers have a variety of computing needs (physical and virtual) and vendor preferences (applications, hypervisor, storage, management, etc.) and is fully committed to delivering the value of UCS to those environments.

There are currently three different Vblock infrastructure packages, each sized for a different implementation.

Vblock 0 (300-800+ VMs)

- An entry level configuration that addresses small data centres or organisations.

- Test / development platform for partners and customers.

Vblock 1 (800-3000+ VMs)

- A mid-sized configuration offering a broad range of IT capabilities for organisations of all sizes.

- Typical use case: shared services such as email, file and print, virtual desktops etc.

Vblock 2 (3000-6000+ VMs)

- A high-end configuration, extensible to meet the most demanding IT needs.

- Typical use case: business critical ERP and CRM systems.

Computacenter deployed the first VCE Vblock 2 infrastructure package in the UK and EMEA to leverage for customer demonstrations. The solution is an integrated, tested and validated


    data centre infrastructure offering that supports a broad range of operating systems and applications that enable customers to accelerate broad-scale data centre virtualisation.

The Vblock 2 is available for customer demonstrations in the Hatfield Solutions Centre.

    For more information or to arrange a demonstration of the Vblock 2 contact the below address:

    [email protected]


8 What to look out for that could indicate a potential UCS opportunity

Customers embarking on a virtualisation project

This is the perfect time to put the platform in place to support x86 virtualisation technologies.

Customers with data centre constraint issues

Customers looking to use blade based technology to reduce DC impact.

Customers looking to upgrade their server infrastructure

Customers looking to migrate workloads from UNIX platforms who require a scalable platform and high memory capacity. Many customers with UNIX workloads are looking at Linux on x86 as a lower cost way of delivering services.

Customers who are planning on building a new data centre

This is the perfect time to position UCS, especially if the customer is bought into the Cisco DC 3.0 strategy (http://www.cisco.com/en/US/netsol/ns340/ns394/ns224/index.html).


9 Pre-empting Customer Concerns

Listed below are some concerns, real or perceived, that customers may have about adopting Cisco UCS.

Q. Isn't Cisco UCS really just for green field data centre deployments?

A. No. Cisco UCS provides up to 86% cabling reduction, reduces the provisioning time for a new server from weeks to minutes, and brings substantial operational expenditure savings. There are numerous and very tangible benefits to deploying Cisco UCS into existing estates.

Q. Cisco UCS gives me a vast increase in Virtual Machine (VM) consolidation, but this means if I lose a blade or mezzanine card I could potentially lose hundreds of VMs.

A. True. However, a significant proportion of unplanned outages in the data centre are due to human error*, and with the vast reduction in cabling and infrastructure along with the simplified management, unplanned outages should in fact decrease.

Also, the mean time between failures of components shows that hardware failures are extremely rare. This is coupled with the fact that all major components have hardware redundancy available and extremely sensitive monitoring in place to detect component degradation in its earliest stages.

On top of this, there are several methods to provide redundancy to virtual machines via VMware, with utilities such as VMware High Availability (HA) and VMware Fault Tolerance (FT).

    *Any survey body you care to mention.


    10 Cisco UCS vs. The Competition

With the innovative technology and strategic partnerships described in this document, Cisco UCS has a real jump on its competitors. The below points compare Cisco UCS with other vendor offerings in the same space.

    While a detailed comparison with each will be a future document in its own right, the key differentiators are detailed below.

As mentioned several times in this document, Cisco UCS is a system made up of several components; as such, just doing a blade-to-blade cost comparison is not a valid exercise. The only true CAPEX cost comparison, if you want to compare apples with apples, is to consider the Cisco UCS system price against the system price of a competitor. Once this comparison is made, Cisco UCS will be shown to be a highly competitive option, with the addition of significant OPEX savings.

10.1 Cisco UCS vs. HP BladeSystem Matrix

As a sci-fi enthusiast I find it apt that HP have named their all-in-one blade system "Matrix".

Just like in the Matrix movie, Neo was told by Morpheus that the Agents, whilst incredibly fast and strong, must still adhere to the laws of reality, and as such could never be as fast or as strong as Neo, who had no such restriction.

Just like the above analogy, the HP BladeSystem Matrix must still adhere to several rules and standards which have already been pushed to the absolute maximum. Cisco UCS, as detailed in the 5 Karate Moves section, has been engineered beyond many such boundaries.

    HP has a separate product for virtual I/O called Virtual Connect. Unlike the Cisco UCS, HP's Virtual Connect is implemented at the I/O module level.


Management is on a per chassis basis with Onboard Administrator modules. Up to 4 chassis can be connected together for consolidation of cabling and management; for scaling beyond four chassis, central management can be provided with the Insight Dynamics VSE Suite. UCS in contrast has a single management IP address and a single user interface for up to 320 servers (40 chassis).

Matrix optionally includes an HP EVA (Enterprise Virtual Array) for storage and HP services to tie all the pieces together. Cisco UCS uses a single redundant management platform, the UCSM.

HP's marketing lists one of the many benefits of Matrix as "Matrix allows customers to get the benefits of a converged system without a rip and replace strategy for all their existing data centre investments."

    HP clearly sees that harnessing existing products without a forklift upgrade is a key differentiator over Cisco UCS and that this also protects investment clients may have made in training staff etc. However Cisco UCS should be a natural and painless transition for IT support professionals.

    Over a year has passed since both products began shipping. The overwhelming popularity, elegance, reliability and ease of deployment of Cisco UCS evidence the three years of investment Cisco made in developing an optimised virtualisation platform. The complexity and limitations with Matrix, on the other hand, indicate a rushed repackaging of existing HP products in response to the Cisco offering.

Cisco UCS vs. HP Matrix comparison

Enterprise scalability
- Cisco UCS: 40 chassis*, 320 blades, tens of thousands of VMs (*20 chassis supported today with UCSM 1.4)
- HP Matrix: 250 total logical servers. Can combine up to 4 CMS to reach 1,000 logical servers, but with no clustering or information sharing. Server profiles cannot be moved from one CMS to another.

Redundancy
- Cisco UCS: All components redundant
- HP Matrix: Central Management Server has no fault tolerance or clustering and little or no redundancy.

Memory
- Cisco UCS: 96GB half width blade and 384GB full width blade (8GB DIMMs)
- HP Matrix: HP BL490c half-height blades: 144GB w/8GB DIMMs, 192GB w/16GB DIMMs; HP BL685c (AMD) blades: 256GB

"Closed" architecture limitations
- Cisco UCS: Requires Cisco servers, CNAs and Fabric Interconnects for optimal performance
- HP Matrix: Requires one of the following specific HP ProLiant blades: BL260c, BL280c, BL460c, BL465c, BL490c, BL495c, BL680c or BL685c

vNIC & vHBA support
- Cisco UCS: Up to 128 each with the Palo adapter (56 vNICs per half-slot server today)

LAN Ethernet
- 16 x 10Gb downlinks to server ports

SAN Fibre Channel
- 16 x 8Gb 2/4/8Gb auto-negotiating server ports


OS support for management software
- Cisco UCS: None required
- HP Matrix: Windows Server 2008 Enterprise Edition 32-bit; Windows Server 2003 Enterprise Edition R2/SP2 32-bit

Database support for management software
- Cisco UCS: None required
- HP Matrix: Microsoft SQL Server 2008; Microsoft SQL Server 2005 SP2; Microsoft SQL Server 2005 Express Edition SP2

Hypervisor support
- Cisco UCS: Supports any x86-based hypervisor. Particular advantages from tight integration with VMware vSphere
- HP Matrix: VMware ESX Server 3.5.0 Update 4; VMware ESX Server 4.0 (pilot & test environments only); Windows Server 2008 Hyper-V (though not yet supported by Insight Recovery)

Guest OS support (server)
- Cisco UCS: Any
- HP Matrix: Windows Server 2008 Datacenter Edition 32-bit and x64; Windows Server 2008 Hyper-V Datacenter x64; Windows Server 2003 Enterprise Edition R2/SP2 32-bit and x64; Red Hat Enterprise Linux 4 Update 7 32-bit, AMD64 and Intel EM64T; Red Hat Enterprise Linux 5 Update 3 32-bit, AMD64 and Intel EM64T; SUSE Linux Enterprise Server 10 SP2 32-bit, AMD64 and Intel EM64T

Guest OS support (VDI)
- Cisco UCS: Any
- HP Matrix: None (no Matrix automated provisioning support)

3rd party development
- Cisco UCS: XML-based API
- HP Matrix: None

QoS
- Cisco UCS: Yes
- HP Matrix: None

Minimum cables required per chassis (inc. FC & redundancy)
- Cisco UCS: 2
- HP Matrix: 6

Maximum cables potentially needed per chassis (inc. FC & redundancy)
- Cisco UCS: 8
- HP Matrix: 34

FCoE
- Cisco UCS: Yes
- HP Matrix: No

Ability to deliver native network and storage performance to VMs via hypervisor bypass
- Cisco UCS: Yes
- HP Matrix: No

Network traffic monitoring & application of live-migration-aware network and security policies
- Cisco UCS: Cisco VN-Link / Nexus 1000V
- HP Matrix: None

Mfg. support
- Cisco UCS: 1-Year
- HP Matrix: 3-Year


10.2 Cisco UCS vs. IBM BladeCenter

IBM H class BladeCenter chassis house up to 14 blade servers and support Intel, AMD and PowerPC processors, versus Cisco UCS, which supports 8 blades with Intel CPUs. However, memory density in the Cisco blades can scale much higher per Intel CPU.

IBM are at this time the only other vendor who offer a Converged Network Adapter compatible with the Cisco Nexus 5000. However, in order to use the IBM CNAs a 10Gb pass-through module is required, meaning that one external cable is required for each server. This could quickly eat up Nexus 5000 ports, as a fully populated single H class chassis would require 14 switch ports. In contrast, Cisco UCS has between 1 and 4 10 Gigabit Ethernet uplinks per I/O module, which equates to a significant cabling reduction over the IBM solution.

Management and ease of use is another big advantage that Cisco UCS has over an IBM solution. IBM require Advanced Management Modules (AMMs) in the blade chassis, each of which has a separate IP address and requires individual management; this does not scale as well and distributes management over many chassis and IP addresses. In addition, network modules and fabric modules are again all managed independently. Cisco UCS on the other hand has a single management IP address and user interface for up to 320 servers (40 chassis), and all network and fibre channel configuration is carried out via this single management interface. IBM do offer a free management product called Director, which does provide a centralised management portal; however, this requires provisioning an additional management server.
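A rough sketch of the upstream port arithmetic (Python, purely illustrative; the per-chassis figures are those given above):

def ibm_passthrough_ports(chassis: int, blades_per_chassis: int = 14) -> int:
    """One Nexus 5000 port per blade with the IBM 10Gb pass-through module."""
    return chassis * blades_per_chassis

def ucs_uplink_ports(chassis: int, uplinks_per_iom: int = 4, ioms: int = 2) -> int:
    """Only the IOM uplinks leave a UCS chassis (1, 2 or 4 per I/O module)."""
    return chassis * uplinks_per_iom * ioms

for n in (1, 4, 8):
    print(f"{n} chassis: IBM {ibm_passthrough_ports(n)} ports, UCS {ucs_uplink_ports(n)} ports")
# 1 chassis: IBM 14 ports vs UCS 8 (or as few as 2 with a single uplink per IOM)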


    11 Glossary of Terms

Unified Computing System (UCS): You should know what this means by now.

Converged Network Adapter (CNA): Contains both Fibre Channel Host Bus Adapter (HBA) and Ethernet Network Interface Card (NIC) functionality on the same adapter card.

Input/Output (I/O): Inputs are the signals or data received by the system, and outputs are the signals or data sent from it.

Data Centre Bridging (DCB): Enhancements to Ethernet local area networks for use in data centre environments.

Fibre Channel over Ethernet (FCoE): Encapsulation of Fibre Channel frames over Ethernet networks.

Internet Small Computer System Interface (iSCSI): Internet Protocol (IP)-based storage networking standard for linking data storage facilities.

Network Attached Storage (NAS): File-level computer data storage connected to a computer network.

Dual Inline Memory Module (DIMM): Memory chips on a small printed circuit board.

Central Processing Unit (CPU): The portion of a computer system that carries out the instructions of a computer program.

Host Bus Adapter (HBA): Connects a host system to other network and storage devices.

Multi-core: Multiple processors that coexist on the same chip.

Thread: The smallest unit of processing that can be scheduled by an operating system.

Process: An instance of a computer program that is being executed.

Socket: The connector on a computer's motherboard for the CPU.

Redundant Array of Independent Disks (RAID): Multiple physical drives presented to an operating system as a single logical drive, often providing fault tolerance for single or multiple drive failures.

Application Specific Integrated Circuit (ASIC): A computer chip designed for a dedicated function.

World Wide Port Name (WWPN): A unique number assigned to a fibre channel host port.

Storage Area Network (SAN): A separate network for storage devices, which appear as if they are directly attached to hosts.

Serial Attached SCSI (SAS): A SAS drive utilises the same form factor as a SATA drive but has several high-performance advantages.


Capital Expenditure (CAPEX): The cost of the initial outlay of a product or system.

Operational Expenditure (OPEX): An ongoing cost for running a product, business, or system.


Appendix A: B Series UCS Blade Servers

The below table details a comparison of the various UCS B-Series server blades.

Model Comparison: Cisco UCS B-Series Blade Servers

Cisco UCS B200 M1 Blade Server
- Processor sockets: 2
- Processors supported: Intel Xeon 5500 Series
- Memory capacity: 12 DIMMs; up to 96 GB
- Memory size and speed: 4 and 8 GB DDR3; 1066 MHz and 1333 MHz
- Internal disk drives: 2x 2.5" SFF SAS or 15mm SATA SSD
- Integrated RAID: 0, 1
- Mezzanine I/O adapter slots: 1
- I/O throughput: up to 20 Gbps
- Form factor: half width
- Max. servers per chassis: 8

Cisco UCS B200 M2 Blade Server
- Processor sockets: 2
- Processors supported: Intel Xeon 5600 Series
- Memory capacity: 12 DIMMs; up to 192 GB
- Memory size and speed: 4, 8 and 16 GB DDR3; 1066 MHz and 1333 MHz
- Internal disk drives: 2x 2.5" SFF SAS or 15mm SATA SSD
- Integrated RAID: 0, 1
- Mezzanine I/O adapter slots: 1
- I/O throughput: up to 20 Gbps
- Form factor: half width
- Max. servers per chassis: 8

Cisco UCS B250 M1 Extended Memory Blade Server
- Processor sockets: 2
- Processors supported: Intel Xeon 5500 Series
- Memory capacity: 48 DIMMs; up to 384 GB
- Memory size and speed: 4 and 8 GB DDR3; 1066 MHz
- Internal disk drives: 2x 2.5" SFF SAS or 15mm SATA SSD
- Integrated RAID: 0, 1
- Mezzanine I/O adapter slots: 2
- I/O throughput: up to 40 Gbps
- Form factor: full width
- Max. servers per chassis: 4

Cisco UCS B250 M2 Extended Memory Blade Server
- Processor sockets: 2
- Processors supported: Intel Xeon 5600 Series
- Memory capacity: 48 DIMMs; up to 384 GB
- Memory size and speed: 4 and 8 GB DDR3; 1066 MHz and 1333 MHz
- Internal disk drives: 2x 2.5" SFF SAS or 15mm SATA SSD
- Integrated RAID: 0, 1
- Mezzanine I/O adapter slots: 2
- I/O throughput: up to 40 Gbps
- Form factor: full width
- Max. servers per chassis: 4

Cisco UCS B230 M1 Blade Server
- Processor sockets: 2
- Processors supported: Intel Xeon 6500 or 7500 Series
- Memory capacity: 32 DIMMs; up to 256 GB
- Memory size and speed: 4 and 8 GB DDR3; 1066 MHz
- Internal disk drives: 2x 2.5" solid-state drives (SSD)
- Integrated RAID: 0, 1
- Mezzanine I/O adapter slots: 1
- I/O throughput: up to 20 Gbps
- Form factor: half width
- Max. servers per chassis: 8

Cisco UCS B440 M1 High-Performance Blade Server
- Processor sockets: 4
- Processors supported: Intel Xeon 7500 Series
- Memory capacity: 32 DIMMs; up to 256 GB
- Memory size and speed: 4 and 8 GB DDR3; 1066 MHz
- Internal disk drives: 4x 2.5" SFF SAS/SATA
- Integrated RAID: 0, 1, 5, 6
- Mezzanine I/O adapter slots: 2
- I/O throughput: up to 40 Gbps
- Form factor: full width
- Max. servers per chassis: 4


Appendix B: C Series UCS Servers

Comparison of Cisco UCS C-Series Rack-Mount Server Features

Cisco UCS C200 M1 and M2
- Ideal for: production-level virtualization and mainstream data centre workloads
- Maximum memory: 96 GB
- Internal disk drives: up to 4
- Built-in RAID: 0 and 1 (SATA only)
- Optional RAID: 0, 1, 5, 6, and 10
- Integrated networking: 2x integrated Gb Ethernet; 10 Gb unified fabric optional
- I/O via PCIe: two half-length x8 slots (one full height, one low profile)
- Multicore processors: up to 2 Intel Xeon 5500 or 5600 Series

Cisco UCS C210 M1 and M2
- Ideal for: economical, high-capacity, reliable internal storage; file, storage, database, and content-delivery workloads
- Maximum memory: 96 GB
- Internal disk drives: up to 16
- Built-in RAID: 0 and 1 (5 SATA drives only)
- Optional RAID: 0, 1, 5, 6, 10, 50, and 60
- Integrated networking: 2x integrated Gb Ethernet; 10 Gb unified fabric optional
- I/O via PCIe: five full-height x8 slots (two full length, three half length)
- Multicore processors: up to 2 Intel Xeon 5500 or 5600 Series

Cisco UCS C250 M1 and M2
- Ideal for: demanding virtualization and large dataset workloads
- Maximum memory: 384 GB
- Internal disk drives: up to 8
- Optional RAID: 0, 1, 5, 6, 10, 50, and 60
- Integrated networking: 4x integrated Gb Ethernet; 10 Gb unified fabric optional
- I/O via PCIe: three low-profile half-length x8 slots; 2 full-height half-length x16 slots
- Multicore processors: up to 2 Intel Xeon 5500 or 5600 Series

Cisco UCS C460 M1
- Ideal for: high-performance, enterprise-critical stand-alone applications and virtualized workloads
- Maximum memory: 512 GB
- Internal disk drives: up to 12
- Optional RAID: 0, 1, 5, 6, 10, 50, and 60
- Integrated networking: 2x Gigabit Ethernet LAN-on-motherboard (LOM) ports; 2x 10 Gigabit Ethernet ports
- I/O via PCIe: ten PCIe slots, all full height (4 half length, 6 three-quarter length; 2 Gen 1, 8 Gen 2)
- Multicore processors: up to 4 Intel Xeon 7500 Series