
THE NEW DATA CENTER

FIRST EDITION

New technologies are radically reshaping the data center

TOM CLARK


Tom Clark, 1947–2010

All too infrequently we have the true privilege of knowing a friend and colleague like Tom Clark. We mourn the passing of a special person, a man who was inspired as well as inspiring, an intelligent and articulate man, a sincere and gentle person with enjoyable humor, and someone who was respected for his great achievements. We will always remember the endearing and rewarding experiences with Tom and he will be greatly missed by those who knew him.

Mark S. Detrick


© 2010 Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Brocade Bookshelf Series designed by Josh Judd

The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History

First Edition, August 2010


Important Notice

Use of this book constitutes consent to the following conditions. This book is supplied “AS IS” for informational purposes only, without warranty of any kind, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this book at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this book may require an export license from the United States government.

Brocade Corporate Headquarters
San Jose, CA USA
T: [email protected]

Brocade European Headquarters
Geneva, Switzerland
T: [email protected]

Brocade Asia Pacific Headquarters
Singapore
T: [email protected]

Acknowledgements

I would first of all like to thank Ron Totah, Senior Director of Marketing at Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers. Ron's consistent support and encouragement for the Brocade Bookshelf projects and Brocade TechBytes Webcast series provides sustained momentum for getting technical information into the hands of our customers.

The real work of project management, copyediting, content generation, assembly, publication, and promotion is done by Victoria Thomas, Technical Marketing Manager at Brocade. Without Victoria's steadfast commitment, none of this material would see the light of day.

I would also like to thank Brook Reams, Solution Architect for Applications on the Integrated Marketing team, for reviewing my draft manuscript and providing suggestions and invaluable insights on the technologies under discussion.

Finally, a thank you to the entire Brocade team for making this a first-class company that produces first-class products for first-class customers worldwide.


About the Author

Tom Clark was a resident SAN evangelist for Brocade and represented Brocade in industry associations, conducted seminars and tutorials at conferences and trade shows, promoted Brocade storage networking solutions, and acted as a customer liaison. A noted author and industry advocate of storage networking technology, he was a board member of the Storage Networking Industry Association (SNIA) and former Chair of the SNIA Green Storage Initiative. Clark published hundreds of articles and white papers on storage networking and is the author of Designing Storage Area Networks, Second Edition (Addison-Wesley, 2003), IP SANs: A Guide to iSCSI, iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley, 2001), Storage Virtualization: Technologies for Simplifying Data Storage and Management (Addison-Wesley, 2005), and Strategies for Data Protection (Brocade Bookshelf, 2008).

Prior to joining Brocade, Clark was Director of Solutions and Technologies for McDATA Corporation and the Director of Technical Marketing for Nishan Systems, the innovator of storage over IP technology. As a liaison between marketing, engineering, and customers, he focused on customer education and defining features that ensure productive deployment of SANs. With more than 20 years of experience in the IT industry, Clark held technical marketing and systems consulting positions with storage networking and other data communications companies.

Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows that he was intelligent, quick, a voice of sanity and also sarcasm, and a pragmatist with a great heart. He was indeed the heart of Brocade TechBytes, a monthly Webcast he described as “a late night technical talk show,” which was launched in November 2008 and is still part of Brocade’s Technical Marketing program.


Contents

Preface ....................................................................................................... xv

Chapter 1: Supply and Demand ..............................................................1

Chapter 2: Running Hot and Cold ...........................................................9
Energy, Power, and Heat ..........................................................................9
Environmental Parameters ....................................................................10
Rationalizing IT Equipment Distribution ................................................11
Economizers ............................................................................................14
Monitoring the Data Center Environment .............................................15

Chapter 3: Doing More with Less ......................................................... 17
VMs Reborn .............................................................................................17
Blade Server Architecture ......................................................................21
Brocade Server Virtualization Solutions ...............................................22

Brocade High-Performance 8 Gbps HBAs .................................................23
Brocade 8 Gbps Switch and Director Ports ..............................................24
Brocade Virtual Machine SAN Boot ...........................................................24
Brocade N_Port ID Virtualization for Workload Optimization ..................25
Configuring Single Initiator/Target Zoning ................................................26
Brocade End-to-End Quality of Service ......................................................26
Brocade LAN and SAN Security .................................................................27
Brocade Access Gateway for Blade Frames ..............................................28
The Energy-Efficient Brocade DCX Backbone Platform for Consolidation ...28
Enhanced and Secure Client Access with Brocade LAN Solutions .........29
Brocade Industry Standard SMI-S Monitoring ..........................................29
Brocade Professional Services ..................................................................30

FCoE and Server Virtualization ..........................................................................31

Chapter 4: Into the Pool ........................................................................ 35
Optimizing Storage Capacity Utilization in the Data Center ....................35
Building on a Storage Virtualization Foundation .....................................39
Centralizing Storage Virtualization from the Fabric .................................41
Brocade Fabric-based Storage Virtualization ...........................................43


Chapter 5: Weaving a New Data Center Fabric ................................. 45
Better Fewer but Better ............................................................................46
Intelligent by Design .................................................................................48
Energy Efficient Fabrics ............................................................................53
Safeguarding Storage Data ......................................................................55
Multi-protocol Data Center Fabrics ..........................................................58
Fabric-based Disaster Recovery ...............................................................64

Chapter 6: The New Data Center LAN ................................................. 69
A Layered Architecture .............................................................................71
Consolidating Network Tiers .................................................................... 74
Design Considerations .............................................................................75

Consolidate to Accommodate Growth .......................................................75
Network Resiliency .....................................................................................76
Network Security .........................................................................................77
Power, Space and Cooling Efficiency .........................................................78
Network Virtualization ................................................................................79

Application Delivery Infrastructure ....................................................................80

Chapter 7: Orchestration ....................................................................... 83

Chapter 8: Brocade Solutions Optimized for Server Virtualization ... 89
Server Adapters ........................................................................................89

Brocade 825/815 FC HBA .........................................................................90
Brocade 425/415 FC HBA .........................................................................91
Brocade FCoE CNAs ....................................................................................91

Brocade 8000 Switch and FCOE10-24 Blade ..........................................92
Access Gateway ........................................................................................93
Brocade Management Pack ....................................................................94
Brocade ServerIron ADX ...........................................................................95

Chapter 9: Brocade SAN Solutions ...................................................... 97
Brocade DCX Backbones (Core) ..............................................................98
Brocade 8 Gbps SAN Switches (Edge) ................................................. 100

Brocade 5300 Switch ...............................................................................101
Brocade 5100 Switch .............................................................................. 102
Brocade 300 Switch ................................................................................ 103
Brocade VA-40FC Switch ......................................................................... 104

Brocade Encryption Switch and FS8-18 Encryption Blade ................... 105
Brocade 7800 Extension Switch and FX8-24 Extension Blade ............ 106
Brocade Optical Transceiver Modules ....................................................107
Brocade Data Center Fabric Manager ................................................... 108

Chapter 10: Brocade LAN Network Solutions ..................................109
Core and Aggregation ............................................................................. 110

Brocade NetIron MLX Series ................................................................... 110
Brocade BigIron RX Series ...................................................................... 111


Access .............................................................................................................. 112
Brocade TurboIron 24X Switch ................................................................ 112
Brocade FastIron CX Series ..................................................................... 113
Brocade NetIron CES 2000 Series ......................................................... 113
Brocade FastIron Edge X Series ............................................................. 114

Brocade IronView Network Manager ..................................................... 115
Brocade Mobility ...................................................................................... 116

Chapter 11: Brocade One ....................................................................117
Evolution not Revolution .........................................................................117
Industry's First Converged Data Center Fabric ..................................... 119

Ethernet Fabric ........................................................................................ 120
Distributed Intelligence ........................................................................... 120
Logical Chassis ........................................................................................ 121
Dynamic Services .................................................................................... 121

The VCS Architecture ....................................................................................... 122

Appendix A: “Best Practices for Energy Efficient Storage Operations” ...123
Introduction .............................................................................................. 123
Some Fundamental Considerations ....................................................... 124
Shades of Green ...................................................................................... 125

Best Practice #1: Manage Your Data ..................................................... 126
Best Practice #2: Select the Appropriate Storage RAID Level .............. 128
Best Practice #3: Leverage Storage Virtualization ................................ 129
Best Practice #4: Use Data Compression .............................................. 130
Best Practice #5: Incorporate Data Deduplication .................................131
Best Practice #6: File Deduplication .......................................................131
Best Practice #7: Thin Provisioning of Storage to Servers .................... 132
Best Practice #8: Leverage Resizeable Volumes ................................... 132
Best Practice #9: Writeable Snapshots .................................................. 132
Best Practice #10: Deploy Tiered Storage ............................................. 133
Best Practice #11: Solid State Storage .................................................. 133
Best Practice #12: MAID and Slow-Spin Disk Technology .................... 133
Best Practice #13: Tape Subsystems ..................................................... 134
Best Practice #14: Fabric Design ........................................................... 134
Best Practice #15: File System Virtualization ........................................ 134
Best Practice #16: Server, Fabric and Storage Virtualization ............... 135
Best Practice #17: Flywheel UPS Technology ........................................ 135
Best Practice #18: Data Center Air Conditioning Improvements ......... 136
Best Practice #19: Increased Data Center Temperatures .................... 136
Best Practice #20: Work with Your Regional Utilities .............................137

What the SNIA is Doing About Data Center Energy Usage .....................137
About the SNIA ......................................................................................... 138

Appendix B: Online Sources .................................................................139

Glossary ..................................................................................................141

Index ........................................................................................................153


Figures

Figure 1. The ANSI/TIA-942 standard functional area connectivity. ................3

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center. ......................................................................4

Figure 3. Hot aisle/cold aisle equipment floor plan. .......................................11

Figure 4. Variable speed fans enable more efficient distribution of cooling. 12

Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling. .........................................................................................13

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling. ...................................................................................................14

Figure 7. A native or Type 1 hypervisor. ...........................................................18

Figure 8. A hosted or Type 2 hypervisor. ..........................................................19

Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements. ...............................................21

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and 1000 IOPS. ..................................................23

Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts. .................................................25

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. ....................................................................26

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters. ..........................27

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape. ..........................................27

Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors. .............................29

Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access. ....31

Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN. ...................32


Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment. ....................................33

Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays. .......................36

Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool. ..................................................37

Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets. .............................................................38

Figure 22. Leveraging classes of storage to align data storage to the business value of data over time. .....................................................................................40

Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers. ..........................42

Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration. ......................................................................................................43

Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time. ............47

Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery. ..................................................49

Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator. ..................50

Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications. ........................................................................................................51

Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services. 52

Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition. ...........................................................................54

Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape. ................................................................................................................56

Figure 32. Using fabric ACLs to secure switch and device connectivity. .......58

Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX. ...............................................61

Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions. ................................................................................................62

Figure 35. IR facilitates resource sharing between physically independent SANs. ...................................................................................................................64

Figure 36. Long-distance connectivity options using Brocade devices. ........67

Figure 37. Access, aggregation, and core layers in the data center network. ...............................................................................................................71

Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy. ...............................................................................73


Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint. .........................................................................75

Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage. ...........................................................79

Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure. ....................................................80

Figure 42. Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX. .............................................................81

Figure 43. Open systems-based orchestration between virtualization domains. ..............................................................................................................84

Figure 44. Brocade Management Pack for Microsoft Service Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration. ............................................................................................................86

Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown). ........................90

Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown). .......................91

Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA. ............................................................................................................92

Figure 48. Brocade 8000 Switch. ....................................................................92

Figure 49. Brocade FCOE10-24 Blade. ............................................................93

Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Center interface. .............................................................................94

Figure 51. Brocade ServerIron ADX 1000. ......................................................95

Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone. ........................98

Figure 53. Brocade 5300 Switch. ................................................................. 101

Figure 54. Brocade 5100 Switch. ................................................................. 102

Figure 55. Brocade 300 Switch. .................................................................... 103

Figure 56. Brocade VA-40FC Switch. ............................................................ 104

Figure 57. Brocade Encryption Switch. ......................................................... 105

Figure 58. Brocade FS8-18 Encryption Blade. ............................................. 105

Figure 59. Brocade 7800 Extension Switch. ................................................ 106

Figure 60. Brocade FX8-24 Extension Blade. ............................................... 107

Figure 61. Brocade DCFM main window showing the topology view. ......... 108

Figure 62. Brocade NetIron MLX-4. ............................................................... 110

Figure 63. Brocade BigIron RX-16. ................................................................ 111

Figure 64. Brocade TurboIron 24X Switch. ................................................... 112

Figure 65. Brocade FastIron CX-624S-HPOE Switch. ................................... 113

Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions. ....................................... 114

Figure 67. Brocade FastIron Edge X 624. ..................................................... 114


Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom). ........................................................................................................... 115

Figure 69. The pillars of Brocade VCS (detailed in the next section). ......... 118

Figure 70. A Brocade VCS reference network architecture. ........................ 122


Preface

Data center administrators today are facing unprecedented challenges. Business applications are shifting from conventional client/server relationships to Web-based applications, data center real estate is at a premium, energy costs continue to escalate, new regulations are imposing more rigorous requirements for data protection and security, and tighter corporate budgets are making it difficult to accommodate client demands for more applications and data storage. Since all major enterprises run their businesses on the basis of digital information, the consequences of inadequate processing power, storage, network accessibility, or data availability can have a profound impact on the viability of the enterprise itself.

At the same time, new technologies that promise to alleviate some of these issues require both capital expenditures and a sharp learning curve to successfully integrate new solutions that can increase productivity and lower ongoing operational costs. The ability to quickly adapt new technologies to new problems is essential for creating a more flexible data center strategy that can meet both current and future requirements. This effort necessitates cooperation between data center administrators and vendors and between the multiple vendors responsible for providing the elements that compose a comprehensive data center solution.

The much overused term “ecosystem” is nonetheless an accurate description of the interdependencies of technologies required for twenty-first century data center operation. No single vendor manufactures the full spectrum of hardware and software elements required to drive data center IT processing. This is especially true when each of the three major domains of IT operations (server, storage, and networking) is undergoing profound technical evolution in the form of virtualization. Not only must products be designed and tested for standards compliance and multi-vendor operability, but management between the domains must be orchestrated to ensure stable operations and coordination of tasks.

Brocade has a long and proven track record in data center network innovation and collaboration with partners to create new solutions that solve real problems while reducing deployment and operational costs. This book provides an overview of the new technologies that are radically transforming the data center into a more cost-effective corporate asset and the specific Brocade products that can help you achieve this goal.

The book is organized as follows:

• “Chapter 1: Supply and Demand” starting on page 1 examines the technological and business drivers that are forcing changes in the conventional data center paradigm. Due to increased business demands (even in difficult economic times), data centers are running out of space and power, and this in turn is driving new initiatives for server, storage, and network consolidation.

• “Chapter 2: Running Hot and Cold” starting on page 9 looks at data center power and cooling issues that threaten productivity and operational budgets. New technologies such as wet- and dry-side economizers, hot aisle/cold aisle rack deployment, and proper sizing of the cooling plant can help maximize productive use of existing real estate and reduce energy overhead.

• “Chapter 3: Doing More with Less” starting on page 17 provides an overview of server virtualization and blade server technology. Server virtualization, in particular, is moving from secondary to primary applications and requires coordination with upstream networking and downstream storage for successful implementation. Brocade has developed a suite of new technologies to leverage the benefits of server virtualization and coordinate operation between virtual machine managers and the LAN and SAN networks.

• “Chapter 4: Into the Pool” starting on page 35 reviews the potential benefits of storage virtualization for maximizing utilization of storage assets and automating life cycle management.


• “Chapter 5: Weaving a New Data Center Fabric” starting on page 45 examines the recent developments in storage networking technology, including higher bandwidth, fabric virtualization, enhanced security, and SAN extension. Brocade continues to pioneer more productive solutions for SANs and is the author or co-author of the significant standards underlying these new technologies.

• “Chapter 6: The New Data Center LAN” starting on page 69 highlights the new challenges that virtualization and Web-based applications present to the data communications network. Products like the Brocade ServerIron ADX Series of application delivery controllers provide more intelligence in the network to offload server protocol processing and provide much higher levels of availability and security.

• “Chapter 7: Orchestration” starting on page 83 focuses on the importance of standards-based coordination between server, storage, and network domains so that management frameworks can provide a comprehensive view of the entire infrastructure and proactively address potential bottlenecks.

• Chapters 8, 9, and 10 provide brief descriptions of Brocade products and technologies that have been developed to solve data center problems.

• “Chapter 11: Brocade One” starting on page 117 describes a new Brocade direction and innovative technologies to simplify the complexity of virtualized data centers.

• “Appendix A: “Best Practices for Energy Efficient Storage Operations”” starting on page 123 is a reprint of an article written by Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage Initiative (GSI).

• “Appendix B: Online Sources” starting on page 139 is a list of online resources.

• The “Glossary” starting on page 141 is a list of data center network terms and definitions.


Chapter 1: Supply and Demand
The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its means. The seemingly endless supply of affordable real estate, electricity, data processing equipment, and technical personnel enabled companies to build large data centers to house their mainframe and open systems infrastructures and to support the diversity of business applications typical of modern enterprises. In the new millennium, however, real estate has become prohibitively expensive, the cost of energy has skyrocketed, utilities are often incapable of increasing supply to existing facilities, data processing technology has become more complex, and the pool of technical talent to support new technologies is shrinking.

At the same time, the increasing dependence of companies and institutions on electronic information and communications has resulted in a geometric increase in the amount of data that must be managed and stored. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of about 1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The installation of more servers and disk arrays to accommodate data growth is simply not sustainable as data centers run out of floor space, cooling capacity, and energy to feed additional hardware. The demands constantly placed on IT administrators to expand support for new applications and data are now in direct conflict with the supply of data center space and power.

Gartner predicted that by 2009, half of the world's data centers would not have sufficient power to support their applications. An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.


The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo effect common to many business operations. A data center administrator, for example, could specify the near-term requirements for power distribution for IT equipment, but because the utility bill was often paid for by the company's facilities management, the administrator would be unaware of continually increasing utility costs. Likewise, individual business units might deploy new rich content applications resulting in a sudden spike in storage requirements and additional load placed on the messaging network, with no proactive notification of the data center and network operators.

In addition, the technical evolution of data center design, cooling technology, and power distribution has lagged far behind the rapid development of server platforms, networks, storage technology, and applications. Twenty-first century technology now resides in twentieth century facilities that are proving too inflexible to meet the needs of the new data processing paradigm. Consequently, many IT managers are looking for ways to align the data center infrastructure to the new realities of space, power, and budget constraints.

Although data centers have existed for over 50 years, guidelines for data center design were not codified into standards until 2005. The ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers focuses primarily on cable plant design but also includes power distribution, cooling, and facilities layout. TIA-942 defines four basic tiers for data center classification, characterized chiefly by the degree of availability each provides:

• Tier 1. Basic data center with no redundancy

• Tier 2. Redundant components but single distribution path

• Tier 3. Concurrently maintainable with multiple distribution pathsand one active

• Tier 4. Fault tolerant with multiple active distribution paths

A Tier 4 data center is obviously the most expensive to build and maintain, but fault tolerance is now essential for most data center implementations. Loss of data access is loss of business, and few companies can afford to risk unplanned outages that disrupt customers and revenue streams. A “five-nines” (99.999%) availability that allows for only 5.26 minutes of data center downtime annually requires redundant electrical, UPS, mechanical, and generator systems. Duplication of power and cooling sources, cabling, network ports, and storage, however, both doubles the cost of the data center infrastructure and the recurring monthly cost of energy. Without new means to reduce the amount of space, cooling, and power while maintaining high data availability, the classic data center architecture is not sustainable.
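
To make the five-nines figure concrete, the downtime budget follows directly from the availability percentage. The following minimal sketch (Python, added here for illustration; the 99.999% target is from the TIA-942 discussion above, everything else is an assumption) shows the arithmetic:

# Downtime budget implied by an availability percentage (illustrative sketch).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes
def allowed_downtime_minutes(availability_pct: float) -> float:
    """Annual downtime allowance for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100.0)
for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability allows {allowed_downtime_minutes(pct):.2f} minutes of downtime per year")
# 99.999% ("five-nines") works out to about 5.26 minutes per year, as cited above.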

Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center. Horizontal distribution is typically subfloor for older raised-floor data centers or ceiling rack drop for newer facilities. The definition of primary functional areas is meant to rationalize the cable plant and equipment placement so that space is used more efficiently and ongoing maintenance and troubleshooting can be minimized. As part of the mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate need. The subfloors of older data centers can be clogged with abandoned bus and tag cables, which are simply too long and too tangled to remove. This impedes airflow and makes it difficult to accommodate new cable requirements.

Note that the overview in Figure 1 does not depict the additional data center infrastructure required for UPS systems (primarily battery rooms), cooling plant, humidifiers, backup generators, fire suppression equipment, and other facilities support systems. Although the support infrastructure represents a significant part of the data center investment, it is often over-provisioned for the actual operational power and cooling requirements of IT equipment. Even though it may be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford. Properly sizing the computer room air conditioning (CRAC) to the proven cooling requirement is one of the first steps in getting data center power costs under control.

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT processing supplemented by the key data center support systems required for high-availability data access. Each unit of powered equipment has a multiplier effect on total energy draw. First, each data center element consumes electricity according to its specific load requirements, typically on a 7x24 basis. Second, each unit dissipates heat as a natural by-product of its operation, and heat removal and cooling requires additional energy draw in the form of the computer room air conditioning system. The CRAC system itself generates heat, which also requires cooling. Depending on the design, the CRAC system may require auxiliary equipment such as cooling towers, pumps, and so on, which draw additional power. Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system. And finally, each element requires UPS support for continuous operation in the event of a power failure. Even in standby mode, the UPS draws power for monitoring controls, charging batteries, and flywheel operation.

Air conditioning and air flow systems typically represent about 37% of a data center's power bill. Although these systems are essential for IT operations, they are often over-provisioned in older data centers, and the original air flow strategy may not work efficiently for rack-mount open systems infrastructure. For an operational data center, however, retrofitting or redesigning air conditioning and air flow during production may not be feasible.

For large data centers in particular, the steady accumulation of more servers, network infrastructure, and storage elements, and their accompanying impact on space, cooling, and energy capabilities, highlights the shortcomings of conventional data center design. Additional space simply may not be available, the air flow may be inadequate for sufficient cooling, and utility-supplied power may already be at its maximum. And yet the escalating requirements for more applications, more data storage, faster performance, and higher availability continue unabated. Resolving this contradiction between supply and demand requires much closer attention to both the IT infrastructure and the data center architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT processing demands. Little attention was paid to the efficiency of electrical distribution systems or the IT equipment they serviced. With energy now at a premium, maximizing utilization of available power by increasing energy efficiency is essential.

Industry organizations have developed new metrics for calculating the energy efficiency of data centers and providing guidance for data center design and operations. The Uptime Institute, for example, has formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to analyze the relationship between total power supplied to the data center and the power that is supplied specifically to operate IT equipment. The total facilities power input divided by the IT equipment power draw highlights the energy losses due to power conversion, heating/cooling, inefficient hardware, and other contributors. An SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, an SI-EER of 2.5 is not uncommon.


Likewise, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems, has proposed a Data Center Infrastructure Efficiency (DCiE) ratio that divides the IT equipment power draw by the total data center facility power. This is essentially the reciprocal of SI-EER, yielding a fractional ratio between the facilities power supplied and the actual power draw for IT processing. With DCiE or SI-EER, however, it is not possible to achieve a 1:1 ratio that would enable every watt supplied to the data center to be productively used for IT processing. Cooling, air flow, humidity control, fire suppression, power distribution losses, backup power, lighting, and other factors inevitably consume power. These supporting elements, however, can be managed so that productive utilization of facilities power is increased and IT processing itself is made more efficient via new technologies and better product design.

Although SI-EER and DCiE are useful tools for a top-down analysis of data center efficiency, it is difficult to support these high-level metrics with real substantiating data. It is not sufficient, for example, to simply use the manufacturer's stated power figures for specific equipment, especially since manufacturer power ratings are often based on projected peak usage and not normal operations. In addition, stated ratings cannot account for hidden inefficiencies (for example, failure to use blanking panels in 19" racks) that periodically increase the overall power draw depending on ambient conditions. The alternative is to meter major data center components to establish baselines of operational power consumption. Although it may be feasible to design in metering for a new data center deployment, it is more difficult for existing environments. The ideal solution is for facilities and IT equipment to have embedded power metering capability that can be solicited via network management frameworks.


High-level SI-EER and DCiE metrics focus on how efficiently a data center delivers power to IT equipment. Unfortunately, this does not provide information on the energy efficiency or productivity of the IT equipment itself. Suppose there were two data centers with equivalent IT productivity: one drawing 50 megawatts of power to drive 25 megawatts of IT equipment would have the same DCiE as one drawing 10 megawatts to drive 5 megawatts of IT equipment. The IT equipment energy efficiency delta could be due to a number of different technology choices, including server virtualization, more efficient power supplies and hardware design, data deduplication, tiered storage, storage virtualization, or other elements. The practical usefulness of high-level metrics is therefore dependent on underlying opportunities to increase energy efficiency in individual products and IT systems. Having a tighter ratio between facilities power input and IT output is good, but lowering the overall input number is much better.
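
Both metrics reduce to a simple ratio. The sketch below (Python, illustrative only; the 50 MW/25 MW and 10 MW/5 MW values are the hypothetical data centers from the preceding paragraph, not measurements) computes SI-EER and DCiE side by side:

def si_eer(facility_power: float, it_power: float) -> float:
    # Uptime Institute ratio: total facility power / IT equipment power
    return facility_power / it_power
def dcie(facility_power: float, it_power: float) -> float:
    # Green Grid ratio: IT equipment power / total facility power
    return it_power / facility_power
for facility_mw, it_mw in ((50.0, 25.0), (10.0, 5.0)):
    print(f"{facility_mw:.0f} MW facility / {it_mw:.0f} MW IT: "
          f"SI-EER = {si_eer(facility_mw, it_mw):.1f}, DCiE = {dcie(facility_mw, it_mw):.1f}")
# Both hypothetical data centers score SI-EER 2.0 and DCiE 0.5, even though one
# consumes five times the energy for the same IT productivity.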

Data center energy efficiency has external implications as well. Currently, data centers in the US alone require the equivalent of more than six 1000-megawatt power plants at a cost of approximately $3B annually. Although that represents less than 2% of US power consumption, it is still a significant and growing number. Global data center power usage is more than twice the US figure. Given that all modern commerce and information exchange is based ultimately on digitized data, the social cost in terms of energy consumption for IT processing is relatively modest. In addition, the spread of digital information and commerce has already provided environmentally friendly benefits in terms of electronic transactions for banking and finance, e-commerce for both retail and wholesale channels, remote online employment, electronic information retrieval, and other systems that have increased productivity and reduced the requirement for brick-and-mortar onsite commercial transactions.

Data center managers, however, have little opportunity to bask in the glow of external efficiencies, especially when energy costs continue to climb and energy sourcing becomes problematic. Although $3B may be a bargain for modern US society as a whole, achieving higher levels of data center efficiency is now a prerequisite for meeting the continued expansion of IT processing requirements. More applications and more data mean either more hardware and energy draw or the adoption of new data center technologies and practices that can achieve much more with far less.


What differentiates the new data center architecture from the old may not be obvious at first glance. There are, after all, still endless racks of blinking lights, cabling, network infrastructure, storage arrays, and other familiar systems, and a certain chill in the air. The differences are found in the types of technologies deployed and the real estate required to house them.

As we will see in subsequent chapters, the new data center is an increasingly virtualized environment. The static relationships between clients, applications, and data characteristic of conventional IT processing are being replaced with more flexible and mobile relationships that enable IT resources to be dynamically allocated when and where they are needed most. The enabling infrastructure in the form of virtual servers, virtual fabrics, and virtual storage has the added benefit of reducing the physical footprint of IT and its accompanying energy consumption. The new data center architecture thus reconciles the conflict between supply and demand by requiring less energy while supplying higher levels of IT productivity.


Chapter 2: Running Hot and Cold
Taking the heat

Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for one third to one half of data center energy consumption. Over-provisioning the thermal plant to accommodate current and future requirements leads to higher operational costs. Under-provisioning the thermal plant to reduce costs can negatively impact IT equipment, increase the risk of equipment outages, and disrupt ongoing business operations. Resolving heat generation issues therefore requires a multi-pronged approach to address (1) the source of heat from IT equipment, (2) the amount and type of cooling plant infrastructure required, and (3) the efficiency of air flow around equipment on the data center floor to remove heat.

Energy, Power, and Heat

In common usage, energy is the capacity of a physical system to do work and is expressed in standardized units of joules (the work done by a force of one newton moving one meter along the line of direction of the force). Power, by contrast, is the rate at which energy is expended over time, with one watt of power equal to one joule of energy per second. The power of a 100-watt light bulb, for example, is equivalent to 100 joules of energy per second, and the amount of energy consumed by the bulb over an hour would be 360,000 joules (0.1 kilowatt hour). Because electrical systems often consume thousands of watts, the amount of energy consumed is expressed in kilowatt hours (kWh), and in fact the kilowatt hour is the preferred unit used by power companies for billing purposes. A system that requires 10,000 watts of power would thus consume and be billed for 10 kWh of energy for each hour of operation, or 240 kWh per day, or 87,600 kWh per year. The typical American household consumes 10,656 kWh per year.


Medium and large IT hardware products are typically in the 1000+ watt range. Fibre Channel directors, for example, can range from as little as 1300 watts (Brocade) to more than 3000 watts (the competition). A large storage array can be in the 6400-watt range. Although low-end servers may be rated at ~200 watts, higher-end enterprise servers can draw as much as 8000 watts. With the high population of servers and the requisite storage infrastructure to support them in the data center, plus the typical 2x factor for the cooling plant energy draw, it is not difficult to understand why data center power bills keep escalating. According to the Environmental Protection Agency (EPA), data centers in the US collectively consume the energy equivalent of approximately 6 million households, or about 61 billion kWh per year.

Energy consumption generates heat. While energy consumption is expressed in watts, heat dissipation is expressed in BTU (British Thermal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because BTUs quickly add up to tens or hundreds of thousands per hour in complex systems, heat can also be expressed in therms, with one therm equal to 100,000 BTU. Your household heating bill, for example, is often listed as therms averaged per day or billing period.
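
Putting these unit relationships together, a short sketch (Python; the 1300-watt director figure is reused from above as an example input, and the conversions are the approximate ones quoted in the text) estimates annual energy use and heat load for a single device:

HOURS_PER_YEAR = 24 * 365
BTU_PER_HOUR_PER_WATT = 3.4   # approximate conversion quoted above
BTU_PER_THERM = 100_000
def annual_kwh(watts: float) -> float:
    # Energy billed over a year of continuous operation
    return watts * HOURS_PER_YEAR / 1000.0
def heat_btu_per_hour(watts: float) -> float:
    # Heat that the cooling plant must remove
    return watts * BTU_PER_HOUR_PER_WATT
director_watts = 1300  # example: an energy-efficient Fibre Channel director
print(f"{annual_kwh(director_watts):,.0f} kWh per year")
print(f"{heat_btu_per_hour(director_watts):,.0f} BTU/h, or about "
      f"{heat_btu_per_hour(director_watts) * HOURS_PER_YEAR / BTU_PER_THERM:,.0f} therms per year")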

Environmental Parameters

Because data centers are closed environments, ambient temperature and humidity must also be considered. ASHRAE Thermal Guidelines for Data Processing Environments provides best practices for maintaining proper ambient conditions for operating IT equipment within data centers. Data centers typically run fairly cool, at about 68 degrees Fahrenheit and 50% relative humidity. While legacy mainframe systems did require considerable cooling to remain within operational norms, open systems IT equipment is less demanding. Consequently, there has been a more recent trend to run data centers at higher ambient temperatures, sometimes disturbingly referred to as “Speedo” mode data center operation. Although ASHRAE's guidelines present fairly broad allowable ranges of operation (50 to 90 degrees Fahrenheit, 20 to 80% relative humidity), recommended ranges are still somewhat narrow (68 to 77 degrees Fahrenheit, 40 to 55% relative humidity).
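
As a simple illustration of how these ranges might be applied, the following sketch (Python; the thresholds are the ASHRAE figures quoted above, while the function and sample readings are assumptions) classifies an ambient reading:

def classify_reading(temp_f: float, humidity_pct: float) -> str:
    # Recommended envelope: 68-77 F, 40-55% RH; allowable: 50-90 F, 20-80% RH
    if 68 <= temp_f <= 77 and 40 <= humidity_pct <= 55:
        return "within recommended range"
    if 50 <= temp_f <= 90 and 20 <= humidity_pct <= 80:
        return "allowable, but outside recommended range"
    return "out of range"
print(classify_reading(72, 50))  # within recommended range
print(classify_reading(85, 30))  # allowable, but outside recommended range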

Rationalizing IT Equipment Distribution

Servers and network equipment are typically configured in standard 19-inch (wide) racks, and the rack enclosures, in turn, are arranged for accessibility for cabling and servicing. Increasingly, however, the floor plan for data center equipment distribution must also accommodate air flow for equipment cooling. This requires that individual units be mounted in a rack for consistent air flow direction (all exhaust to the rear or all exhaust to the front) and that the rows of racks be arranged to exhaust into a common space, called a hot aisle/cold aisle plan, as shown in Figure 3.

Figure 3. Hot aisle/cold aisle equipment floor plan.

A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold to hot air flow for each equipment row into a common aisle. Each cold aisle feeds cool air for two equipment rows while each hot aisle allows exhaust for two equipment rows, thus enabling maximum benefit for the hot/cold circulation infrastructure. Even greater efficiency is achieved by deploying equipment with variable-speed fans.

Figure 4. Variable speed fans enable more efficient distribution of cooling.

Variable speed fans increase or decrease their spin rate in response to changes in equipment temperature. As shown in Figure 4, cold air flow into equipment racks with constant speed fans favors the hardware mounted in the lower equipment slots and thus nearer to the cold air feed. Equipment mounted in the upper slots is heated by its own power draw as well as the heat exhaust from the lower tiers. Use of variable speed fans, by contrast, enables each unit to selectively apply cooling as needed, with more even utilization of cooling throughout the equipment rack.

Research done by Michael Patterson and Annabelle Pratt of Intel leverages the hot aisle/cold aisle floor plan approach to create a metric for measuring energy consumption of IT equipment. By convention, the energy consumption of a unit of IT hardware can be measured physically via use of metering equipment or approximated via use of the manufacturer's stated power rating (in watts or BTUs).

As shown in Figure 5, Patterson and Pratt incorporate both the energy draw of the equipment mounted within a rack and the associated hot aisle/cold aisle real estate required to cool the entire rack. This "work cell" unit thus provides a more accurate description of what is actually required to power and cool IT equipment and, supposing the equipment (for example, servers) is uniform across a row, provides a useful multiplier for calculating the total energy consumption of an entire row of mounted hardware.

Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling.
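
A minimal sketch of the work cell idea follows: the power drawn by the equipment in one rack plus the cooling attributed to that rack, multiplied across a uniform row. The rack population, per-server wattage, cooling share, and row length are all assumptions for illustration, not values from Patterson and Pratt's research.

    # Sketch of the "work cell" multiplier: rack equipment power plus the cooling
    # attributable to that rack, multiplied across a uniform equipment row.
    # All numbers below are illustrative assumptions.

    servers_per_rack = 42
    watts_per_server = 400                  # nameplate or measured per-server draw
    rack_it_watts = servers_per_rack * watts_per_server

    cooling_watts_per_rack = rack_it_watts  # assume cooling roughly matches the rack's IT draw
    work_cell_watts = rack_it_watts + cooling_watts_per_rack

    racks_in_row = 10
    row_watts = work_cell_watts * racks_in_row
    print(f"Work cell: {work_cell_watts / 1000:.1f} kW, row of {racks_in_row}: {row_watts / 1000:.1f} kW")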

When energy was plentiful and cheap, it was often easy to overlook the basic best practices for data center hardware deployment and the simple remedies to correct inefficient air flow. Blanking plates, for example, are used to cover unused rack or cabinet slots and thus enforce more efficient airflow within an individual rack. Blanking plates, however, are often ignored, especially when equipment is frequently moved or upgraded. Likewise, it is not uncommon to find decommissioned equipment still racked up (and sometimes actually powered on). Racked but unused equipment can disrupt air flow within a cabinet and become a trap for the heat generated by active hardware. In raised floor data centers, decommissioned cabling can disrupt cold air circulation, and unsealed cable cutouts can result in continuous and fruitless loss of cooling. Because the cooling plant itself represents such a significant share of data center energy use, even seemingly minor issues can quickly add up to major inefficiencies and higher energy bills.

Economizers

Traditionally, data center cooling has been provided by large air conditioning systems (computer room air conditioning, or CRAC) that used CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refrigerants. Since both CFCs and HCFCs are ozone depleting, current systems use ozone-friendly refrigerants to minimize broader environmental impact. Conventional CRAC systems, however, consume significant amounts of energy and may account for nearly half of a data center power bill. In addition, these systems are typically over-provisioned to accommodate data center growth and consequently incur a higher operational expense than is justified for the required cooling capacity.

For new data centers in temperate or colder latitudes, economizers can provide part or all of the cooling requirement. Economizer technology dates to the mid-1800s but has seen a revival in response to rising energy costs. As shown in Figure 6, an economizer (in this case, a dry-side economizer) is essentially a heat exchanger that leverages cooler outside ambient air temperature to cool the equipment racks.

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.

Use of outside air has its inherent problems. Data center equipment is sensitive to particulates that can build up on circuit boards and contribute to heating issues. An economizer may therefore incorporate particulate filters to scrub the external air before the air flow enters the data center. In addition, external air may be too humid or too dry for data center use. Integrated humidifiers and dehumidifiers can condition the air flow to meet operational specifications for data center use. As stated above, ASHRAE recommends 40 to 55% relative humidity.

Dry-side economizers depend on the external air supply temperature being sufficiently lower than the data center itself, and this may fluctuate seasonally. Wet-side economizers thus include cooling towers as part of the design to further condition the air supply for data center use. Cooling towers present their own complications, especially in more arid geographies where water resources are expensive and scarce. Ideally, economizers should leverage as many recyclable resources as possible to accomplish the task of cooling while reducing any collateral environmental impact.

Monitoring the Data Center Environment

Because vendor wattage and BTU specifications may assume maximum load conditions, using data sheet specifications or equipment label declarations does not provide an accurate basis for calculating equipment power draw or heat dissipation. An objective multi-point monitoring system for measuring heat and humidity throughout the data center is really the only means to observe and proactively respond to changes in the environment.

A number of monitoring options are available today. For example, some vendors are incorporating temperature probes into their equipment design to provide continuous reporting of heat levels via management software. Some solutions provide rack-mountable systems that include both temperature and humidity probes and monitoring through a Web interface. Fujitsu offers a fiber optic system that leverages the effect of temperature on light propagation to provide a multi-point probe using a single fiber optic cable strung throughout equipment racks. Accuracy is reported to be within a half degree Celsius and within 1 meter of the measuring point. In addition, new monitoring software products can render a three-dimensional view of temperature distribution across the entire data center, analogous to an infrared photo of a heat source.
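
Purely as an illustration of multi-point monitoring, the sketch below scans a set of probe readings and flags locations above a threshold; the probe names, readings, and alert limit are assumptions and do not represent the behavior of any particular vendor's product.

    # Illustrative multi-point monitoring sketch: flag probe locations whose
    # readings drift above a target limit. Readings and thresholds are assumptions.

    readings = {
        "rack12-top":    84.5,   # degrees Fahrenheit
        "rack12-bottom": 69.0,
        "rack13-top":    76.2,
        "cold-aisle-3":  66.8,
    }

    TARGET_MAX_F = 80.0   # assumed alert threshold

    def hot_spots(readings, limit):
        """Return probe locations whose temperature exceeds the limit."""
        return sorted(loc for loc, temp in readings.items() if temp > limit)

    for location in hot_spots(readings, TARGET_MAX_F):
        print(f"ALERT: {location} at {readings[location]:.1f} F exceeds {TARGET_MAX_F} F")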

Although monitoring systems add cost to data center design, they are invaluable diagnostic tools for fine-tuning airflow and equipment placement to maximize cooling and keep power and cooling costs to a minimum. Many monitoring systems can be retrofitted to existing data center plants so that even older sites can leverage new technologies.

Chapter 3: Doing More with Less
Leveraging virtualization and blade server technologies

Of the three primary components of an IT data center infrastructure (servers, storage, and network), servers are by far the most populous and have the highest energy impact. Servers represent approximately half of the IT equipment energy cost and about a quarter of the total data center power bill. Server technology has therefore been a prime candidate for regulation via EPA Energy Star and other market-driven initiatives and has undergone a transformation in both hardware and software. Server virtualization and blade server design, for example, are distinct technologies fulfilling different goals, but together they have a multiplying effect on server processing performance and energy efficiency. In addition, multi-core processors and multi-processor motherboards have dramatically increased server processing power in a more compact footprint.

VMs Reborn

The concept of virtual machines dates back to mainframe days. To maximize the benefit of mainframe processing, a single physical system was logically partitioned into independent virtual machines. Each VM ran its own operating system and applications in isolation, although the processor and peripherals could be shared. In today's usage, VMs typically run on open systems servers, and although direct-connect storage is possible, shared storage on a SAN or NAS is the norm. Unlike previous mainframe implementations, today's virtualization software can support dozens of VMs on a single physical server. Typically, 10 or fewer VM instances are run per physical platform, although more powerful server platforms can support 20 or more VMs.

The benefits of server virtualization are as obvious as the potential risks. Running 10 VMs on a single server platform eliminates the need for 9 additional servers with their associated cost, components, and accompanying power draw and heat dissipation. For data centers with hundreds or thousands of servers, virtualization offers an immediate solution for server sprawl and ever increasing costs.
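
The savings arithmetic behind that 10:1 example can be sketched as follows; the per-server wattage, the draw of the consolidated host, and the electricity rate are assumptions used only to show the shape of the calculation.

    # Rough arithmetic behind a 10:1 server consolidation.
    # Per-server wattage, host wattage, and electricity rate are assumptions.

    standalone_servers = 10
    watts_per_server = 400            # assumed average rack-mount server draw
    virtual_host_watts = 800          # assumed draw of the single, larger host

    watts_saved = standalone_servers * watts_per_server - virtual_host_watts
    kwh_saved_per_year = watts_saved / 1000 * 24 * 365

    rate_per_kwh = 0.10               # assumed $/kWh
    print(f"~{watts_saved} W avoided, ~{kwh_saved_per_year:,.0f} kWh/year, "
          f"~${kwh_saved_per_year * rate_per_kwh:,.0f}/year before cooling savings")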

Like any virtualization strategy, however, the logical separation of VMs must be maintained and access to server memory and external peripherals negotiated to prevent conflicts or errors. VMs on a single platform are hosted by a hypervisor layer, which runs either directly (Type 1 or native) on the server hardware or on top of (Type 2 or hosted) the conventional operating system already running on the server hardware.

Figure 7. A native or Type 1 hypervisor.

In a native Type 1 virtualization implementation, the hypervisor runs directly on the server hardware as shown in Figure 7. This type of hypervisor must therefore support all CPU, memory, network, and storage I/O traffic directly without the assistance of an underlying operating system. The hypervisor is consequently written to a specific CPU architecture (for open systems, typically an Intel x86 design) and associated I/O. Clearly, one of the benefits of native hypervisors is that overall latency can be minimized as individual VMs perform the normal functions required by their applications. With the hypervisor directly managing hardware resources, it is also less vulnerable over time to code changes or updates that might be required if an underlying OS were used.

Figure 8. A hosted or Type 2 hypervisor.

As shown in Figure 8, a hosted or Type 2 server virtualization solution is installed on top of the host operating system. The advantage of this approach is that virtualization can be implemented on existing servers to more fully leverage existing processing power and support more applications in the same footprint. Given that the host OS and hypervisor layer insert additional steps between the VMs and the lower-level hardware, this hosted implementation incurs more latency than native hypervisors. On the other hand, hosted hypervisors can readily support applications with moderate performance requirements and still achieve the objective of consolidating compute resources.

In both native and hosted hypervisor environments, the hypervisor oversees the creation and activity of its VMs to ensure that each VM has its requisite resources and does not interfere with the activity of other VMs. Without the proper management of shared memory tables by the hypervisor, for example, one VM instance could easily crash another. The hypervisor must also manage the software traps created to intercept hardware calls made by the guest OS and provide the appropriate emulation of normal OS hardware access and I/O.

Because the hypervisor is now managing multiple virtual computers, secure access to the hypervisor itself must be maintained. Efforts to standardize server virtualization management for stable and secure operation are being led by the Distributed Management Task Force (DMTF) through its Virtualization Management Initiative (VMAN) and through collaborative efforts by virtualization vendors and partner companies.

Server virtualization software is now available for a variety of CPUs, hardware platforms, and operating systems. Adoption for mid-tier, moderate-performance applications has been enabled by the availability of economical dual-core CPUs and commodity rack-mount servers. High-performance requirements can be met with multi-CPU platforms optimized for shared processing. Although server virtualization has steadily been gaining ground in large data centers, there has been some reluctance to commit the most mission-critical applications to VM implementations. Consequently, mid-tier applications have been first in line, and as these deployments become more pervasive and proven, mission-critical applications will follow.

In addition to providing a viable means to consolidate server hardware and reduce energy costs, server virtualization enables a degree of mobility unachievable via conventional server management. Because the virtual machine is now detached from the underlying physical processing, memory, and I/O hardware, it is possible to migrate a virtual machine from one hardware platform to another non-disruptively. If, for example, an application's performance is beginning to exceed the capabilities of its shared physical host, it can be migrated onto a less busy host or one that supports faster CPUs and I/O. This application agility, which initially was just an unintended by-product of migrating virtual machines, has become one of the compelling reasons to invest in a virtual server solution. With ever-changing business, workload, and application priorities, the ability to quickly shift processing resources where most needed is a competitive business advantage.

As discussed in more detail below, virtual machine mobility creates new opportunities for automating application distribution within the virtual server pool and implementing policy-based procedures to enforce priority handling of select applications over others. Communication between the virtualization manager and the fabric via APIs, for example, enables proactive response to potential traffic congestion or changes in the state of the network infrastructure. This further simplifies management of application resources and ensures higher availability.

Blade Server Architecture

Server consolidation in the new data center can also be achieved by deploying blade server frames. The successful development of blade server architecture has been dependent on the steady increase in CPU processing power and on solving basic problems around shared power, cooling, memory, network, storage, and I/O resources. Although blade servers are commonly associated with server virtualization, these are distinct technologies that have a multiplying benefit when combined.

Blade server design strips away all but the most essential dedicated components from the motherboard and provides shared assets as either auxiliary special-function blades or as part of the blade chassis hardware. Consequently, the power consumption of each blade server is dramatically reduced while power supply, fans, and other elements are shared with greater efficiency. A standard data center rack, for example, can accommodate 42 1U conventional rack-mount servers, but 128 or more blade servers in the same space. A single rack of blade servers can therefore house the equivalent of 3 racks of conventional servers; and although the cooling requirement for a fully populated blade server rack may be greater than for a conventional server rack, it is still less than that of the equivalent 3 racks that would otherwise be required.

As shown in Figure 9, a blade server architecture offloads all components that can be supplied by the chassis or by supporting specialized blades. The blade server itself is reduced to one or more CPUs and requisite auxiliary logic. The degree of component offload and availability of specialized blades varies from vendor to vendor, but the net result is essentially the same. More processing power can now be packed into a much smaller space, and compute resources can be managed more efficiently.

Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements.

By significantly reducing the number of discrete components per processing unit, the blade server architecture achieves higher efficiencies in manufacturing, reduced consumption of resources, streamlined design, and reduced overall costs of provisioning and administration. The unique value-add of each vendor's offering may leverage hot-swap capability, variable-speed fans, variable-speed CPUs, shared memory blades, and consolidated network access. Brocade has long worked with the major blade server manufacturers to provide optimized Access Gateway and switch blades to centralize storage network capability, and the specific features of these products will be discussed in the next section.

Although consolidation ratios of 3:1 are impressive, much higher server consolidation is achieved when blade servers are combined with server virtualization software. A fully populated data center rack of 128 blade servers, for example, could support 10 or more virtual machines per blade for a total of 1280 virtual servers. That would be the equivalent of 30 racks (at 42 servers per rack) of conventional 1U rack-mount servers running one OS instance per server. From an energy savings standpoint, that represents the elimination of over 1000 power supplies, fan units, network adapters, and other elements that contribute to higher data center power bills and cooling load.
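
The consolidation arithmetic in that example works out as follows; the only inputs are the figures quoted above (128 blades per rack, 10 VMs per blade, 42 conventional 1U servers per rack).

    # Consolidation arithmetic: one rack of blade servers running virtual machines
    # versus conventional 1U servers running one OS instance each.

    blades_per_rack = 128
    vms_per_blade = 10
    virtual_servers = blades_per_rack * vms_per_blade     # 1,280 OS instances

    conventional_servers_per_rack = 42
    racks_required_conventional = virtual_servers / conventional_servers_per_rack

    print(f"{virtual_servers} virtual servers in one blade rack "
          f"~= {racks_required_conventional:.0f} racks of conventional 1U servers")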

As a 2009 survey by blade.org shows, adoption of blade server technology has been increasing in both large data centers and small/medium business (SMB) environments. Slightly less than half of the data center respondents and approximately a third of SMB operations have already implemented blade servers, and over a third in both categories have deployment plans in place. With limited data center real estate and increasing power costs squeezing data center budgets, the combination of blade servers and server virtualization is fairly easy to justify.

Brocade Server Virtualization Solutions

Whether on standalone servers or blade server frames, implementing server virtualization has both upstream (client) and downstream (storage) impact in the data center. Because Brocade offers a full spectrum of products spanning LAN, WAN, and SAN, it can help ensure that a server virtualization deployment proactively addresses the new requirements of both client and storage access. The value of a server virtualization solution is thus amplified when combined with Brocade's network technology.

To maximize the benefits of network connectivity in a virtualized server environment, Brocade has worked with the major server virtualization solutions and managers to deliver high performance, high availability, security, energy efficiency, and streamlined management end to end. The following Brocade solutions can enhance a server virtualization deployment and help eliminate potential bottlenecks:

Brocade High-Performance 8 Gbps HBAs

In a conventional server, a host bus adapter (HBA) provides storage access for a single operating system and its applications. In a virtual server configuration, the HBA may be supporting 10 to 20 OS instances, each running its own application. High performance is therefore essential for enabling multiple virtual machines to share HBA ports without congestion. The Brocade 815 (single port) and 825 (dual port, shown in Figure 10) HBAs provide 8 Gbps bandwidth and 500,000 I/Os per second (IOPS) performance per port to ensure maximum throughput for shared virtualized connectivity. Brocade N_Port Trunking enables the 825 to deliver an unprecedented 16 Gbps bandwidth (3200 MBps) and one million IOPS performance. This exceptional performance helps ensure that server virtualization configurations can expand over time to accommodate additional virtual machines without impacting the continuous operation of existing applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.

The Brocade 815 and 825 HBAs are further optimized for server virtualization connectivity by supporting advanced intelligent services that enable end-to-end visibility and management. As discussed below, Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV), and integrated Quality of Service (QoS) provide powerful tools for simplifying virtual machine deployments and providing proactive alerts directly to server virtualization managers.

Brocade 8 Gbps Switch and Director Ports

In virtual server environments, the need for speed does not end at the network or storage port. Because more traffic is now traversing fewer physical links, building high-performance network infrastructures is a prerequisite for maintaining non-disruptive, high-performance virtual machine traffic flows. Brocade's support of 8 Gbps ports on both switch and enterprise-class platforms enables customers to build high-performance, non-blocking storage fabrics that can scale from small VM configurations to enterprise-class data center deployments. Designing high-performance fabrics ensures that applications running on virtual machines are not exposed to bandwidth issues and can accommodate the high-volume traffic patterns required for data backup and other applications.

Brocade Virtual Machine SAN Boot

For both standalone physical servers and blade server environments, the ability to boot from the storage network greatly simplifies virtual machine deployment and migration of VM instances from one server to another. As shown in Figure 11, SAN boot centralizes management of boot images and eliminates the need for local storage on each physical server platform. When virtual machines are migrated from one hardware platform to another, the boot images can be readily accessed across the SAN via Brocade HBAs.

Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically retrieve boot LUN parameters from a centralized fabric-based registry. This eliminates the error-prone manual host-based configuration scheme required by other HBA vendors. Brocade's SAN boot and boot LUN discovery facilitates migration of virtual machines from host to host, removes the need for local storage, and improves reliability and performance.

Brocade N_Port ID Virtualization for Workload Optimization

In a virtual server environment, the individual virtual machine instances are unaware of physical ports since the underlying hardware has been abstracted by the hypervisor. This creates potential problems for identifying traffic flows from virtual machines through shared physical ports. NPIV is an industry standard that enables multiple Fibre Channel addresses to share a single physical Fibre Channel port. In a server virtualization environment, NPIV allows each virtual machine instance to have a unique World Wide Name (WWN) or virtual HBA port. This in turn provides a level of granularity for identifying each VM attached to the fabric for end-to-end monitoring, accounting, and configuration. Because the WWN is now bound to an individual virtual machine, the WWN follows the VM when it is migrated to another platform. In addition, NPIV creates the linkage required for advanced services such as QoS, security, and zoning, as discussed in the next section.

Configuring Single Initiator/Target Zoning

Brocade has been a pioneer in fabric-based zoning to segregate fabric traffic and restrict visibility of storage resources to only authorized hosts. As a recognized best practice for server-to-storage configuration, NPIV and single initiator/target zoning ensure that individual virtual machines have access only to their designated storage assets. This feature minimizes configuration errors during VM migration and extends the management visibility of fabric connections to specific virtual machines.

Brocade End-to-End Quality of Service

The combination of NPIV and zoning functionality on Brocade HBAs and switches provides the foundation for higher-level fabric services, including end-to-end QoS. Because the traffic flows from each virtual machine can be identified by virtual WWN and segregated via zoning, each can be assigned a delivery priority (low, medium, or high) that is enforced fabric-wide from the host connection to the storage port, as shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. Virtual Channels technology enables QoS at the ASIC level in the HBA, frame-level interleaving of outbound data maximizes initiator link utilization, and the default QoS priority is Medium.

While some applications running on virtual machines are logical candidates for QoS prioritization (for example, SQL Server), Brocade's Top Talkers management feature can help identify which VM applications may require priority treatment. Because Brocade end-to-end QoS is ultimately tied to the virtual machine's virtualized WWN address, the QoS assignment follows the VM if it is migrated from one hardware platform to another.

This feature ensures that applications enjoy non-disruptive data access despite adds, moves, and changes to the downstream environment and enables administrators to more easily fulfill client service-level agreements (SLAs).

Brocade LAN and SAN Security

Most companies are now subject to government regulations that mandate the protection and security of customer data transactions. Planning a virtualization deployment must therefore also account for basic security mechanisms for both client and storage access. Brocade offers a broad spectrum of security solutions, including LAN- and WAN-based technologies and storage-specific SAN security features. For example, Brocade SecureIron products, shown in Figure 13, provide firewall traffic management and LAN security to safeguard access from clients to virtual hosts on the IP network.

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control lists (ACLs) and role-based access control (RBAC) as well as security mechanisms for authenticating connectivity of switch ports and devices to fabrics. In addition, the Brocade Encryption Switch, shown in Figure 14, and the FS8-18 Encryption Blade for the Brocade DCX Backbone platform provide high-performance (96 Gbps) data encryption for data-at-rest. Brocade's security environment thus protects data-in-flight from client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.

Brocade Access Gateway for Blade Frames

Server virtualization software can be installed on conventional server platforms or blade server frames. Blade server form factors offer the highest density for consolidating IT processing in the data center and leverage shared resources across the backplane. To optimize storage access from blade server frames, Brocade has partnered with blade server providers to create high-performance, high-availability Access Gateway blades for Fibre Channel connectivity to the SAN. Brocade Access Gateway technology leverages NPIV to simplify virtual machine addressing and F_Port Trunking for high utilization and automatic link failover. By integrating SAN connectivity into a virtualized blade server chassis, Brocade helps to streamline deployment and simplify management while reducing overall costs.

The Energy-Efficient Brocade DCX Backbone Platform for Consolidation

With 4x the performance and over 10x the energy efficiency of other SAN directors, the Brocade DCX delivers the high performance required for virtual server implementation and can accommodate growth in VM environments in a compact footprint. The Brocade DCX supports 384 ports of 8 Gbps for a total of 3 Tbps chassis bandwidth. Ultra-high-speed inter-chassis links (ICLs) allow further expansion of the SAN core for scaling to meet the requirements of very large server virtualization deployments. The Brocade DCX is also designed to non-disruptively integrate Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) for future virtual server connectivity. The Brocade DCX is also available in a 192-port configuration (as the Brocade DCX-4S) to support medium VM configurations, while providing the same high availability, performance, and advanced SAN services.

The Brocade DCX's Adaptive Networking services for QoS, ingress rate limiting, congestion detection, and management ensure that traffic streams from virtual machines are proactively managed throughout the fabric and accommodate the varying requirements of upper-layer business applications. Adaptive Networking services provide greater agility in managing application workloads as they migrate between physical servers.

Enhanced and Secure Client Access with Brocade LAN Solutions

Brocade offers a full line of sophisticated LAN switches and routers for Ethernet and IP traffic from Layer 2/3 to Layer 4–7 application switching. This product suite is the natural complement to Brocade's robust SAN products and enables customers to build full-featured and secure networks end to end. As with the Brocade DCX architecture for SANs, the Brocade BigIron RX, shown in Figure 15, and FastIron SuperX switches incorporate best-in-class functionality and low power consumption to deliver high-performance core switching for data center LAN backbones.

Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors.

Brocade edge switches with Power over Ethernet (PoE) support enable customers to integrate a wide variety of IP business applications, including voice over IP (VoIP), wireless access points, and security monitoring. Brocade SecureIron switches bring advanced security protection for client access into virtualized server clusters, while Brocade ServerIron switches provide Layer 4–7 application switching and load balancing. Brocade LAN solutions provide up to 10 Gbps throughput per port and so can accommodate the higher traffic loads typical of virtual machine environments.

Brocade Industry Standard SMI-S Monitoring

Virtual server deployments dramatically increase the number of data flows and requisite bandwidth per physical server or blade server. Because server virtualization platforms can support dynamic migration of application workloads between physical servers, complex traffic patterns are created and unexpected congestion can occur. This complicates server management and can impact performance and availability. Brocade can proactively address these issues by integrating communication between Brocade intelligent fabric services and VM managers.

As the fabric monitors potential congestion on a per-VM basis, it can proactively alert virtual machine management that a workload should be migrated to a less utilized physical link. Because this diagnostic functionality is fine-tuned to the workflows of each VM, changes can be restricted to only the affected VM instances.

Open management standards such as the Storage Management Initiative (SMI) are the appropriate tools for integrating virtualization management platforms with fabric management services. As one of the original contributors to the SMI-S specification, Brocade is uniquely positioned to provide a truly open systems solution end to end. In addition, configuration management, capacity planning, SLA policies, and virtual machine provisioning can be integrated with Brocade fabric services such as Adaptive Networking, encryption, and security policies.

Brocade Professional Services

Even large companies that want to take advantage of the cost savings, consolidation, and lower energy consumption characteristic of server virtualization technology may not have the staff or in-house expertise to plan and implement a server virtualization project. Many organizations fail to consider the overall impact of virtualization on the data center, and that in turn can lead to degraded application performance, inadequate data protection, and increased management complexity. Because Brocade technology is ubiquitous in the vast majority of data centers worldwide and Brocade has years of experience in the most mission-critical IT environments, it can provide a wealth of practical knowledge and insight into the key issues surrounding client-to-server and server-to-storage data access. Brocade Professional Services has helped hundreds of customers upgrade to virtualized server infrastructures and provides a spectrum of services from virtual server assessments, audits, and planning to end-to-end deployment and operation. A well-conceived and executed virtualization strategy can ensure that a virtual machine deployment achieves its budgetary goals and fulfills the prime directive to do far more with much less.

FCoE and Server Virtualization

Fibre Channel over Ethernet is an optional storage network interconnect for both conventional and virtualized server environments. As a means to encapsulate Fibre Channel frames in Ethernet, FCoE enables a simplified cabling solution for reducing the number of network and storage interfaces per server attachment. The combined network and storage connection is now provided by a converged network adapter (CNA), as shown in Figure 16.

Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access.

Given the more rigorous requirements for storage data handling and performance, FCoE is not intended to run on conventional Ethernet networks. In order to replicate the low latency, deterministic delivery, and high performance of traditional Fibre Channel, FCoE is best supported on a new, hardened form of Ethernet known as Converged Enhanced Ethernet (CEE), or Data Center Bridging (DCB), at 10 Gbps. Without the enhancements of DCB, standard Ethernet is too unreliable to support high-performance block storage transactions. Unlike conventional Ethernet, DCB provides the much more robust congestion management and high-availability features characteristic of data center Fibre Channel.

DCB replicates Fibre Channel's buffer-to-buffer credit flow control functionality via priority-based flow control (PFC) using 802.1Qbb pause frames. Instead of buffer credits, pause quanta are used to restrict traffic for a given period to relieve network congestion and avoid dropped frames. To accommodate the larger payload of Fibre Channel frames, DCB-enabled switches must also support jumbo frames so that entire Fibre Channel frames can be encapsulated in each Ethernet transmission. Other standards initiatives such as TRILL (Transparent Interconnection of Lots of Links) are being developed to enable multiple pathing through DCB-switched infrastructures.
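
To put pause frames in perspective, the sketch below converts a PFC pause value into elapsed time, using the standard convention that one pause quantum equals the time needed to transmit 512 bits at the link speed; the 10 GbE link rate is the case discussed in this chapter.

    # Duration of a priority-based flow control (PFC) pause: the pause field counts
    # "quanta", each quantum being the time to send 512 bits at the link speed.

    def pause_duration_us(quanta, link_gbps=10):
        """Return the pause duration in microseconds for a given quanta count."""
        seconds_per_quantum = 512 / (link_gbps * 1e9)
        return quanta * seconds_per_quantum * 1e6

    # A full-scale pause (the 16-bit field's maximum of 65,535 quanta) at 10 GbE:
    print(f"{pause_duration_us(65535):.0f} microseconds")   # roughly 3,355 us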

FCoE is not a replacement for conventional Fibre Channel but is an extension of Fibre Channel over a different link layer transport. Enabling an enhanced Ethernet to carry both Fibre Channel storage data as well as other data types (for example, file data, Remote Direct Memory Access (RDMA), LAN traffic, and VoIP) allows customers to simplify server connectivity and still retain the performance and reliability required for storage transactions. Instead of provisioning a server with dual-redundant Ethernet and Fibre Channel ports (a total of four ports), servers can be configured with two DCB-enabled 10 Gigabit Ethernet (GbE) ports. For blade server installations, in particular, this reduction in the number of interfaces greatly simplifies deployment and ongoing management of the cable plant.

The FCoE initiative has been developed in the ANSI T11 Technical Committee, which deals with FC-specific issues, and is included in a new Fibre Channel Backbone Generation 5 (FC-BB-5) specification. Because FCoE takes advantage of further enhancements to Ethernet, close collaboration has been required between ANSI T11 and the Institute of Electrical and Electronics Engineers (IEEE), which governs Ethernet and the new DCB standards.

Storage access is provided by an FCoE-capable blade in a director chassis (end of row) or by a dedicated FCoE switch (top of rack), as shown in Figure 17.

Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN.

In this example, the client, peer-to-peer, and block storage traffic share a common 10 Gbps network interface. The FCoE switch acts as a Fibre Channel Forwarder (FCF) and converts FCoE frames into conventional Fibre Channel frames for redirection to the fabric. Peer-to-peer or clustering traffic between servers in the same rack is simply switched at Layer 2 or 3, and client traffic is redirected via the LAN.

Like many new technologies, FCoE is often overhyped as a cure-all for pervasive IT ills. The benefit of streamlining server connectivity, however, should be balanced against the cost of deployment and the availability of value-added features that simplify management and administration. As an original contributor to the FCoE specification, Brocade has designed FCoE products that integrate with existing infrastructures so that the advantages of FCoE can be realized without adversely impacting other operations. Brocade offers the 1010 (single port) and 1020 (dual port) CNAs, shown in Figure 18, at 10 Gbps DCB per port. From the host standpoint, the FCoE functionality appears as a conventional Fibre Channel HBA.

Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.

The Brocade 8000 Switch provides top-of-rack connectivity for servers with 24 ports of 10 Gbps DCB and 8 ports of 8 Gbps Fibre Channel. Fibre Channel ports support trunking for a total of 64 Gbps bandwidth, while the 10 Gbps DCB ports support standard Link Aggregation Control Protocol (LACP). Fibre Channel connectivity can be directly to storage end-devices or to existing fabrics, enabling greater flexibility for allocating storage assets to hosts.

Chapter 4: Into the Pool
Transcending physical asset management with storage virtualization

Server virtualization achieves greater asset utilization by supporting multiple instances of discrete operating systems and applications on a single hardware platform. Storage virtualization, by contrast, provides greater asset utilization by treating multiple physical platforms as a single virtual asset or pool. Consequently, although storage virtualization does not provide a comparable direct benefit in terms of reduced footprint or energy consumption in the data center, it does enable a substantial benefit in productive use of existing storage capacity. This in turn often reduces the need to deploy new storage arrays, and so provides an indirect benefit in terms of continued acquisition costs, deployment, management, and energy consumption.

Optimizing Storage Capacity Utilization in the Data Center

Storage administrators typically manage multiple storage arrays, often from different vendors and each with its own unique characteristics. Because servers are bound to Logical Unit Numbers (LUNs) in specific storage arrays, high-volume applications may suffer from over-utilization of storage capacity while low-volume applications under-utilize their storage targets.

Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays.

As shown in Figure 19, the uneven utilization of storage capacity across multiple arrays puts some applications at risk of running out of disk space, while neighboring arrays still have excess idle capacity. This problem is exacerbated by server virtualization, since each physical server now supports multiple virtual machines with additional storage LUNs and a more dynamic utilization of storage space. The hard-coded assignment of storage capacity on specific storage arrays to individual servers or VMs is too inflexible to meet the requirements of the more fluid IT environments characteristic of today's data centers.

Storage virtualization solves this problem by inserting an abstraction layer between the server farm and the downstream physical storage targets. This abstraction layer can be supported on the host, the storage controller, within the fabric, or on dedicated virtualization appliances.

Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.

As illustrated in Figure 20, storage virtualization breaks the physical assignment between servers and their target LUNs. The storage capacity of each physical storage system is now assigned to a virtual storage pool from which virtual LUNs (for example, LUNs 1 through 6 at the top of the figure) can be created and dynamically assigned to servers. Because the availability of storage capacity is no longer restricted to individual storage arrays, LUN creation and sizing is dependent only on the total capacity of the virtual pool. This enables more efficient utilization of the aggregate storage space and facilitates the creation and management of dynamic volumes that can be sized to changing application requirements.

In addition to enabling more efficient use of storage assets, the abstraction layer provided by storage virtualization creates a homogeneous view of storage. The physical arrays shown in Figure 20, for example, could be from any vendor and have proprietary value-added features. Once LUNs are created and assigned to the storage pool, however, vendor-specific functionality is invisible to the servers. From the server perspective, the virtual storage pool is one large generic storage system. Storage virtualization thus facilitates sharing of storage capacity among systems that would otherwise be incompatible with each other.

As with all virtualization solutions, masking the underlying complexity of physical systems does not make that complexity disappear. Instead, the abstraction layer provided by virtualization software and hardware logic (the virtualization engine) must assume responsibility for errors or changes that occur at the physical layer. In the case of storage virtualization specifically, management of backend complexity centers primarily on the maintenance of the metadata mapping required to correlate virtual storage addresses to real ones, as shown in Figure 21.

Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets.

Storage virtualization proxies virtual targets (storage) and virtual initiators (servers) so that real initiators and targets can connect to the storage pool without modification using conventional SCSI commands. The relationship between virtual and real storage LUN assignment is maintained by the metadata map. A virtual LUN of 500 GB, for example, may map to storage capacity spread across several physical arrays. Loss of the metadata mapping would mean loss of access to the real data. A storage virtualization solution must therefore guarantee the integrity of the metadata mapping and provide safeguards in the form of replication and synchronization of metadata map copies.
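
The metadata map can be pictured as an ordered list of extents per virtual LUN, as in the minimal sketch below. The layout, field names, and the 500 GB example volume are illustrative assumptions and do not reflect the internal format of any actual virtualization engine.

    # Illustrative metadata map: a virtual LUN presented to a host is an ordered
    # list of extents carved from physical arrays. This layout is an assumption
    # for illustration, not any vendor's actual format.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        array: str         # physical array identifier
        physical_lun: int  # LUN on that array
        offset_gb: int     # starting offset within the physical LUN
        length_gb: int

    # A 500 GB virtual LUN spread across three arrays, as in the example above.
    virtual_lun_map = {
        "vLUN-7": [
            Extent("Array A", physical_lun=43, offset_gb=0,   length_gb=200),
            Extent("Array B", physical_lun=22, offset_gb=100, length_gb=150),
            Extent("Array C", physical_lun=55, offset_gb=0,   length_gb=150),
        ]
    }

    def resolve(vlun, virtual_offset_gb):
        """Translate a virtual offset into (array, physical LUN, physical offset)."""
        consumed = 0
        for ext in virtual_lun_map[vlun]:
            if virtual_offset_gb < consumed + ext.length_gb:
                return ext.array, ext.physical_lun, ext.offset_gb + (virtual_offset_gb - consumed)
            consumed += ext.length_gb
        raise ValueError("offset beyond end of virtual LUN")

    print(resolve("vLUN-7", 320))   # lands in the Array B extent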

As a data center best practice, creation of storage pools from multiple physical storage arrays should be implemented by storage class. High-end RAID arrays contribute to one virtual pool; lower-performance arrays should be assigned to a separate pool. Aggregating like assets in the same pool ensures consistent performance and comparable availability for all virtual LUNs and thus minimizes problematic inconsistencies among disparate systems. In addition, there are benefits in maintaining separate classes of virtualized storage systems for applications such as lifecycle management, as will be discussed in the next section.

Building on a Storage Virtualization Foundation

Storage virtualization is an enabling technology for higher levels of data management and data protection and facilitates centralized administration and automation of storage operations. Vendor literature on storage virtualization is consequently often linked to snapshot technology for data protection, replication for disaster recovery, virtual tape backup, data migration, and information lifecycle management (ILM). Once storage assets have been vendor-neutralized and pooled via virtualization, it is easier to overlay advanced storage services that are not dependent on vendor proprietary functionality. Data replication for remote disaster recovery, for example, no longer depends on a vendor-specific application and licensing but can be executed via a third-party solution.

One of the central challenges of next-generation data center design is to align infrastructure to application requirements. In the end, it's really about the upper-layer business applications, their availability and performance, and safeguarding the data they generate and process. For data storage, aligning infrastructure to applications requires a more flexible approach to the handling and maintenance of data assets as the business value of the data itself changes over time. As shown in Figure 22, information lifecycle management can leverage virtualized storage tiers to pair the cost of virtual storage containers to the value of the data they contain.

Providing that each virtual storage tier is composed of a similar class of products, each tier represents different performance and availability characteristics, burdened cost of storage, and energy consumption.

Figure 22. Leveraging classes of storage to align data storage to the business value of data over time.

By migrating data from one level to another as its immediate business value declines, capacity on high-value systems is freed to accommodate new active transactions. Business practice or regulatory compliance, however, can require that migrated data remain accessible within certain time frames. Tier 2 and 3 classes may not have the performance and 99.999% availability of Tier 1 systems but still provide adequate accessibility before the data can finally be retired to tape. In addition, if each tier is a virtual storage pool, maximum utilization of the storage capacity of a tier can help reduce overall costs and more readily accommodate the growth of aged data without the addition of new disk drives.

Establishing tiers of virtual storage pools by storage class also provides the foundation for automating data migration from one level to another over time. Policy-based data migration can be triggered by a number of criteria, including frequency of access to specific data sets, the age of the data, flagging transactions as completed, or appending metadata to indicate data status. Reducing or eliminating the need for manual operator intervention can significantly reduce administrative costs and enhance the return on investment (ROI) of a virtualization deployment.
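
A toy example of such a policy engine is sketched below; the thresholds, tier names, and sample data sets are assumptions intended only to show how simple criteria can drive automated tier placement.

    # Illustrative policy-based migration: choose a target tier from simple
    # criteria (age, access frequency, transaction status). All thresholds,
    # tiers, and data sets below are assumptions.

    def target_tier(days_since_last_access, days_since_creation, transaction_complete):
        """Return the storage tier a data set should occupy under a sample policy."""
        if days_since_creation > 365 and transaction_complete:
            return "tape archive"
        if days_since_last_access > 180:
            return "Tier 3"
        if days_since_last_access > 30:
            return "Tier 2"
        return "Tier 1"

    # Hypothetical data sets: (name, days since last access, age in days, complete?)
    data_sets = [
        ("orders-2010Q2", 12, 60, False),
        ("orders-2009Q3", 200, 320, True),
        ("orders-2008Q1", 400, 800, True),
    ]

    for name, last_access, age, done in data_sets:
        print(name, "->", target_tier(last_access, age, done))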

Figure 22 plots classes of storage by burdened cost per GB against the value of the data: Tier 1 (roughly 10x, high value), Tier 2 (4x, moderate value), and Tier 4 (0.5x, archive).

Centralizing Storage Virtualization from the Fabric

Although storage virtualization can be implemented on host systems, storage controllers, dedicated appliances, or within the fabric via directors or switches, there are trade-offs for each solution in terms of performance and flexibility. Because host-based storage virtualization is deployed per server, for example, it incurs greater overhead in terms of administration and consumes CPU cycles on each host. Dedicated storage virtualization appliances are typically deployed between multiple hosts and their storage targets, making it difficult to scale to larger configurations without performance and availability issues. Implementing storage virtualization on the storage array controllers is a viable alternative, providing that the vendor can accommodate heterogeneous systems for multi-vendor environments. Because all storage data flows through the storage network, or fabric, however, fabric-based storage virtualization has been a compelling solution for centralizing the virtualization function and enabling more flexibility in scaling and deployment.

The central challenge for fabric-based virtualization is to achieve the highest performance while maintaining the integrity of metadata mapping and exception handling. Fabric-based virtualization is now codified in an ANSI/INCITS T11.5 standard, which provides APIs for communication between virtualization software and the switching elements embedded in a switch or director blade. The Fabric Application Interface Standard (FAIS) separates the control path to a virtualization engine (typically external to the switch) from the data paths between initiators and targets. As shown in Figure 23, the Control Path Processor (CPP) represents the virtualization intelligence layer and the FAIS interface, while the Data Path Controller (DPC) ensures that the proper connectivity is established between the servers, storage ports, and the virtual volume created via the CPP. Exceptions are forwarded to the CPP, freeing the DPC to continue processing valid transactions.

Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.

Because the DPC function can be executed in an ASIC at the switch level, it is possible to achieve very high performance without impacting upper-layer applications. This is a significant benefit over host-based and appliance-based solutions. And because communication between the virtualization engine and the switch is supported by standards-based APIs, it is possible to run a variety of virtualization software solutions.

The central role a switch plays in providing connectivity between servers and storage, and the FAIS-enabled ability to execute metadata mapping for virtualization, also creates new opportunities for fabric-based services such as mirroring or data migration. With high performance and support for heterogeneous storage systems, fabric-based services can be implemented with much greater transparency than alternate approaches and can scale over time to larger deployments.

Brocade Fabric-based Storage Virtualization

Engineered to the ANSI/INCITS T11 FAIS specification, the Brocade FA4-18 Application Blade provides high-performance storage virtualization for the Brocade 48000 Director and Brocade DCX Backbone.

NOTE: Information for the Brocade DCX Backbone also includes the Brocade DCX-4S Backbone unless otherwise noted.

Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration.

As shown in Figure 24, compatibility with both the Brocade 48000 and Brocade DCX chassis enables the Brocade FA4-18 Application Blade to extend the benefits of Brocade energy-efficient design and high bandwidth to advanced fabric services without requiring a separate enclosure. Interoperability with existing SAN infrastructures amplifies this advantage, since any server connected to the SAN can be directed to the FA4-18 blade for virtualization services. Line-speed metadata mapping is achieved through purpose-built components instead of relying on the general-purpose processors that other vendors use.


The virtualization application is provided by third-party and partner solutions, including EMC Invista software. For Invista specifically, a Control Path Cluster (CPC) consisting of two processor platforms attached to the FA4-18 provides high availability and failover in the event of link or unit failure. Initial configuration of storage pools is performed on the Invista CPC and downloaded to the FA4-18 for execution. Because the virtualization functionality is driven in the fabric and under configuration control of the CPC, this solution requires no host middleware or host CPU cycles for attached servers.

For new data center design or upgrade, storage virtualization is a natural complement to server virtualization. Fabric-based storage virtualization offers the added advantage of flexibility, performance, and transparency to both servers and storage systems as well as enhanced control over the virtual environment.


Chapter 5: Weaving a New Data Center Fabric

Intelligent design in the storage infrastructure

In the early days of SAN adoption, storage networks tended to evolve spontaneously in reaction to new requirements for additional ports to accommodate new servers and storage devices. In practice, this meant acquiring new fabric switches and joining them to an existing fabric via E_Port connection, typically in a mesh configuration to provide alternate switch-to-switch links. As a result, data centers gradually built very large and complex storage networks composed of 16- or 32-port Fibre Channel switches. At a certain critical mass, these large multi-switch fabrics became problematic and vulnerable to fabric-wide disruptions through state change notification (SCN) broadcasts or fabric reconfigurations. For large data centers in particular, the response was to begin consolidating the fabric by deploying high-port-count Fibre Channel directors at the core and using the 16- or 32-port switches at the edge for device fan-out.

Consolidation of the fabric brings several concrete benefits, including greater stability, high performance, and the ability to accommodate growth in ports without excessive dependence on inter-switch links (ISLs) to provide connectivity. A well-conceived core/edge SAN design can provide optimum pathing between groups of servers and storage ports with similar performance requirements, while simplifying management of SAN traffic. The concept of a managed unit of SAN is predicated on the proper sizing of a fabric configuration to meet both connectivity and manageability requirements. Keeping the SAN design within rational boundaries, however, is now facilitated with new standards and features that bring more power and intelligence to the fabric.


As with server and storage consolidation, fabric consolidation is also driven by the need to reduce the number of physical elements in the data center and their associated power requirements. Each additional switch means additional redundant power supplies, fans, heat generation, cooling load, and data center real estate. As with blade server frames, high-port-density platforms such as the Brocade DCX Backbone enable more concentrated productivity in a smaller footprint and with a lower total energy budget. The trend in new data center design is therefore to architect the entire storage infrastructure for minimal physical and energy impact while accommodating inevitable growth over time. Although lower-port-count switches are still viable solutions for departmental, small-to-medium size business (SMB), and fan-out applications, Brocade backbones are now the cornerstone for optimized data center fabric designs.

Better Fewer but Better

Storage area networks substantially differ from conventional data communications networks in a number of ways. A typical LAN, for example, is based on peer-to-peer communications with all endpoints (nodes) sharing equal access. The underlying assumption is that any node can communicate with any other node at any time. A SAN, by contrast, cannot rely on peer-to-peer connectivity since some nodes are active (initiators/servers) and others are passive (storage targets). Storage systems do not typically communicate with each other (with the exception of disk-to-disk data replication or array-based virtualization) across the SAN. Targets also do not initiate transactions, but passively wait for an initiator to access them. Consequently, storage networks must provide a range of unique services to facilitate discovery of storage targets by servers, restrict access to only authorized server/target pairs, zone or segregate traffic between designated groups of servers and their targets, and provide notifications when storage assets enter or depart the fabric. These services are not required in conventional data communication networks. In addition, storage traffic requires deterministic delivery, whereas LAN and WAN protocols are typically best-effort delivery systems.

These distinctions play a central role in the proper design of data center SANs. Unfortunately, some vendors fail to appreciate the unique requirements of storage environments and recommend what are essentially network-centric architectures instead of the more appropriate storage-centric approach. Applying a network-centric design to storage inevitably results in a failure to provide adequate safeguards for storage traffic and a greater vulnerability to inefficiencies, disruption, or poor performance. Brocade's strategy is to promote


storage-centric SAN designs that more readily accommodate the unique and more demanding requirements of storage traffic and ensure stable and highly available connectivity between servers and storage systems.

A storage-centric fabric design is facilitated by concentrating key corporate storage elements at the core, while accommodating server access and departmental storage at the edge. As shown in Figure 25, the SAN core can be built with high-port-density backbone platforms. With up to 384 x 8 Gbps ports in a single chassis or up to 768 ports in a dual-chassis configuration, the core layer can support hundreds of storage ports and, depending on the appropriate fan-in ratio, thousands of servers in a single high-performance solution. The Brocade DCX Backbone, a 14U chassis with eight vertical blade slots, is also available as the 192-port, 8U Brocade DCX-4S with four horizontal blade slots; both chassis are compatible with any Brocade DCX blade. Because two or even three backbone chassis can be deployed in a single 19" rack or adjacent racks, real estate is kept to a minimum. Power consumption of less than half a watt per Gbps provides over 10x the energy efficiency of comparable enterprise-class products. Doing more with less is thus realized through compact product design and engineering power efficiency down to the port level.

Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.


In this example, servers and storage assets are configured to best meet the performance and traffic requirements of specific business applications. Mission-critical servers with high-performance requirements, for example, can be attached directly to the core layer to provide the optimum path to primary storage. Departmental storage can be deployed at the edge layer, while still enabling servers to access centralized storage resources. With 8 Gbps port connectivity and the ability to trunk multiple inter-switch links between the edge and core, this design provides the flexibility to support different bandwidth and performance needs for a wide range of business applications in a single coherent architecture.

In terms of data center consolidation, a single-rack, dual-chassis Brocade DCX configuration of 768 ports can replace 48 x 16-port or 24 x 32-port switches, providing much more efficient use of fabric address space, centralized management and microcode version control, and a dramatic decrease in maintenance overhead, energy consumption, and cable complexity. Consequently, current data center best practices for storage consolidation now incorporate fabric consolidation as a foundation for shrinking the hardware footprint and its associated energy costs. In addition, because the Brocade DCX 8 Gbps port blades are backward compatible with 1, 2, and 4 Gbps speeds, existing devices can be integrated into a new consolidated design without expensive upgrades.

Intelligent by Design

The new data center fabric is characterized by high port density, compact footprint, low energy costs, and streamlined management, but the most significant differentiating features compared to conventional SANs revolve around increased intelligence for storage data transport. New functionality that streamlines data delivery, automates data flows, and adapts to changed network conditions both ensures stable operation and reduces the need for manual intervention and administrative oversight. Brocade has developed a number of intelligent fabric capabilities under the umbrella term of Adaptive Networking services to streamline fabric operations.

Large complex SANs, for example, typically support a wide variety of business applications, ranging from high-performance and mission-critical to moderate-performance requirements. In addition, storage-specific applications such as tape backup may share the same infrastructure as production applications. If all storage traffic types were treated with the same priority, high-value applications could be congested or disrupted by the


traffic load of moderate-value applications. Brocade addresses this problem via a quality of service (QoS) mechanism, which enables the storage administrator to assign priority values to different applications.

Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.

As shown in Figure 26, applications running on conventional or virtualized servers can be assigned high, medium, or low priority delivery through the fabric. This QoS solution guarantees that essential but lower-priority applications such as tape backup do not overwhelm mission-critical applications such as online transaction processing (OLTP). It also makes it much easier to deploy new applications over time or migrate existing virtual machines, since the QoS priority level of an application moderates its consumption of available bandwidth. When combined with the high performance and 8 Gbps port speed of Brocade HBAs, switches, directors, and backbone platforms, QoS provides an additional means to meet application requirements despite fluctuations in aggregate traffic loads.
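As a rough illustration of how priority levels moderate bandwidth consumption, the following Python sketch models weighted sharing of a congested 8 Gbps link. The weight values and application names are invented for the example and do not represent actual Fabric OS QoS ratios.

# Illustrative model of priority-based bandwidth sharing; not Fabric OS behavior or syntax.
QOS_WEIGHTS = {"high": 5, "medium": 3, "low": 2}     # assumed relative weights
flows = {"OLTP": "high", "email": "medium", "tape_backup": "low"}

def share_of_link(app, flows, link_gbps=8):
    # Each flow receives a share of the congested link proportional to its weight.
    total = sum(QOS_WEIGHTS[p] for p in flows.values())
    return link_gbps * QOS_WEIGHTS[flows[app]] / total

for app in flows:
    print(f"{app}: ~{share_of_link(app, flows):.1f} Gbps under congestion")
# When the link is not congested, any flow may use the full 8 Gbps;
# the weights only govern arbitration when demand exceeds capacity.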

Because traffic loads vary over time and sudden spikes in workload can occur unexpectedly, congestion on a link, particularly between the fabric and a burdened storage port, can occur. Ideally, a flow control mechanism would enable the fabric to slow the pace of traffic at the source of the problem, typically a very active server generating an atypical workload. Another Adaptive Networking service, Brocade ingress rate limiting (IRL), proactively monitors the traffic levels on all links and, when congestion is sensed on a specific link, identifies the


initiating source. Ingress rate limiting allows the fabric switch to throttle the transmission rate of a server to a speed lower than the originally negotiated link speed.

Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.

In the example shown in Figure 27, the Brocade DCX monitors potential congestion on the link to a storage array and proactively reduces the rate of transmission at the server source. If, for example, the server HBA had originally negotiated an 8 Gbps transmission rate when it initially logged in to the fabric, ingress rate limiting could reduce the transmission rate to 4 Gbps or lower, depending on the volume of traffic to be reduced to alleviate congestion at the storage port. Thus, without operator intervention, potentially disruptive congestion events can be resolved proactively, while ensuring continuous operation of all applications.
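The throttling behavior can be pictured with a small Python sketch. The congestion thresholds and the halving/doubling steps below are assumptions chosen purely for illustration, not the actual algorithm the fabric uses.

# Simplified sketch of ingress rate limiting; thresholds and step sizes are assumptions.
NEGOTIATED_GBPS = 8

def adjust_ingress_limit(current_limit_gbps, target_port_utilization):
    """Step the sender's ingress limit down while the target port is congested,
    and restore it toward the negotiated rate once congestion clears."""
    if target_port_utilization > 0.9:                    # congestion detected
        return max(1, current_limit_gbps / 2)            # e.g. 8 -> 4 -> 2 Gbps
    if target_port_utilization < 0.5:                    # congestion cleared
        return min(NEGOTIATED_GBPS, current_limit_gbps * 2)
    return current_limit_gbps

limit = NEGOTIATED_GBPS
for utilization in [0.95, 0.92, 0.6, 0.4, 0.3]:
    limit = adjust_ingress_limit(limit, utilization)
    print(f"storage port at {utilization:.0%} -> host ingress limited to {limit} Gbps")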

Brocade's Adaptive Networking services also enable storage administrators to establish preferred paths for specific applications through the fabric and the ability to fail over from a preferred path to an alternate path if the preferred path is unavailable. This capability is especially useful for isolating certain applications such as tape backup or disk-to-disk replication to ensure that they always enter or exit on the same inter-switch link to optimize the data flow and avoid overwhelming other application streams.


Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.

Figure 28 illustrates a fabric with two primary business applications (ERP and Oracle) and a tape backup segment. In this example, the tape backup preferred path is isolated from the ERP and Oracle database paths so that the high volume of traffic generated by backup does not interfere with the production applications. Because the preferred path traffic isolation zone also accommodates failover to alternate paths, the storage administrator does not have to intervene manually if issues arise in a particular isolation zone.
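A minimal sketch of this preferred-path behavior follows. The zone names and ISL identifiers are hypothetical; the sketch simply captures the idea that each application is pinned to a preferred inter-switch link with automatic failover to an alternate.

# Conceptual traffic isolation: each application is pinned to a preferred ISL
# and fails over to an alternate only if the preferred link is down.
isolation_zones = {
    "tape_backup": {"preferred": "ISL_3", "alternate": "ISL_4"},
    "ERP":         {"preferred": "ISL_1", "alternate": "ISL_2"},
    "Oracle":      {"preferred": "ISL_2", "alternate": "ISL_1"},
}

def select_isl(app, link_up):
    zone = isolation_zones[app]
    return zone["preferred"] if link_up.get(zone["preferred"], False) else zone["alternate"]

link_state = {"ISL_1": True, "ISL_2": True, "ISL_3": False, "ISL_4": True}
for app in isolation_zones:
    print(f"{app} traffic routed over {select_isl(app, link_state)}")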

To more easily identify which applications might require specialized treatment with QoS, rate limiting, or traffic isolation, Brocade has provided a Top Talkers monitor for devices in the fabric. Top Talkers automatically monitors the traffic pattern on each port to diagnose over- or under-utilization of port bandwidth.


Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.

Applications that generate higher volumes of traffic through the fabric are primary candidates for Adaptive Networking services, as shown in Figure 29. This functionality is especially useful in virtual server environments, since the deployment of new VMs or migration of VMs from one platform to another can have unintended consequences. Top Talkers can help indicate when a migration might be desirable to benefit from higher bandwidth or preferred pathing.

In terms of aligning infrastructure to applications, Top Talkers allows administrators to deploy fabric resources where and when they are needed most. Configuring additional ISLs to create a higher-performance trunk, for example, might be required for particularly active applications, while moderate-performance applications could continue to function quite well on conventional links.
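The kind of report Top Talkers produces can be approximated with a few lines of Python. The port names, throughput figures, and utilization thresholds below are sample values chosen for the example, not output from an actual switch.

# Toy version of a Top Talkers report: rank ports by observed throughput
# and flag candidates for QoS, rate limiting, or trunking.
port_mbps = {"port_14": 6200, "port_20": 950, "port_56": 7900, "port_57": 120}
PORT_CAPACITY_MBPS = 8000

for port, rate in sorted(port_mbps.items(), key=lambda kv: kv[1], reverse=True):
    utilization = rate / PORT_CAPACITY_MBPS
    note = "candidate for trunking/QoS" if utilization > 0.7 else \
           "under-utilized" if utilization < 0.1 else "normal"
    print(f"{port}: {rate} Mbps ({utilization:.0%}) - {note}")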


Energy Efficient Fabrics

In the previous era of readily available and relatively cheap energy, data center design focused more on equipment placement and convenient access than on the power requirements of the IT infrastructure. Today, many data centers simply cannot obtain additional power capacity from their utilities or are under severe budget constraints to cover ongoing operational expense. Consequently, data center managers are scrutinizing the power requirements of every hardware element and looking for means to reduce the total data center power budget. As we have seen, this is a major driver for technologies such as server virtualization and consolidation of hardware assets across the data center, including storage and storage networking.

The energy consumption of data center storage systems and storage networking products has been one of the key focal points of the Storage Networking Industry Association (SNIA) in the form of the SNIA Green Storage Initiative (GSI) and Green Storage Technical Working Group (GS TWG). In January 2009, the SNIA GSI released the SNIA Green Storage Power Measurement Specification as an initial document to formulate standards for measuring the energy efficiency of different classes of storage products. For storage systems, energy efficiency can be defined in terms of watts per megabyte of storage capacity. For fabric elements, energy efficiency can be defined in watts per gigabyte per second of bandwidth. Brocade played a leading role in the formation of the SNIA GSI, participates in the GS TWG, and leads by example in pioneering the most energy-efficient storage fabric products in the market.
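The two metrics can be made concrete with a short worked example. The wattage, capacity, and port-count figures below are round numbers chosen only to show how the ratios are computed; they are not measured values for any particular product.

# Worked example of the two efficiency metrics described above (illustrative numbers only).
storage_watts, capacity_mb = 5000, 100_000_000        # a hypothetical 100 TB storage array
fabric_watts, bandwidth_gbps = 2000, 768 * 8          # a hypothetical 768-port 8 Gbps configuration

print(f"storage efficiency: {storage_watts / capacity_mb:.5f} watts per MB of capacity")
print(f"fabric efficiency:  {fabric_watts / (bandwidth_gbps / 8):.2f} watts per GBps "
      f"({fabric_watts / bandwidth_gbps:.2f} watts per Gbps)")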

Achieving the greatest energy efficiency in fabric switches and directors requires a holistic view of product design so that all components are optimized for low energy draw. Enterprise switches and directors, for example, are typically provisioned with dual-redundant power supplies for high availability. From an energy standpoint, it would be preferable to operate with only a single power supply, but business availability demands redundancy for failover. Consequently, it is critical to design power supplies that have at least 80% efficiency in converting AC input power into DC output to service switch components. Likewise, the cooling efficiency of fan modules and the selection and placement of discrete components for processing elements and port cards all add to a product design optimized for high performance and low energy consumption. Typically, for every watt of power consumed for productive IT processing, another watt is required to cool the


equipment. Dramatically lowering the energy consumption of fabric switches and directors therefore has a dual benefit in terms of reducing both direct power costs and indirect cooling overhead.

The Brocade DCX achieves an energy efficiency of less than a watt of power per gigabit of bandwidth. That is 10x more efficient than comparable directors on the market and frees up available power for other IT equipment. To highlight this difference in product design philosophy, in laboratory tests a fully loaded Brocade director consumed less power (4.6 Amps) than an empty chassis from a competitor (5.1 Amps). The difference in energy draw of two comparably configured directors would be enough to power an entire storage array. Energy-efficient switch and director designs have a multiplier benefit as more elements are added to the SAN. Although the fabric infrastructure as a whole is a small part of the total data center energy budget, it can be leveraged to reduce costs and make better use of available power resources. As shown in Figure 30, power measurements on an 8 Gbps port at full speed show the Brocade DCX advantage.

Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.


Safeguarding Storage Data

Unfortunately, SAN security has been a back-burner issue for many storage administrators due in part to several myths about the security of data centers in general. These myths (listed below) are addressed in detail in Roger Bouchard's Securing Fibre Channel Fabrics (Brocade Bookshelf) and include assumptions about data center physical security and the difficulty of hacking into Fibre Channel networks and protocols. Given that most breaches in storage security occur through operator error and lost disks or tape cartridges, however, threats to storage security are typically internal, not external, risks.

The centrality of the fabric in providing both host and storage connectivity provides new opportunities for safeguarding storage data. As with other intelligent fabric services, fabric-based security mechanisms can help ensure consistent implementation of security policies and the flexibility to apply higher levels of security where they are most needed.

SAN Security Myths

• SAN Security Myth #1. SANs are inherently secure since they are in a closed, physically protected environment.

• SAN Security Myth #2. The Fibre Channel protocol is not well known by hackers and there are almost no avenues available to attack FC fabrics.

• SAN Security Myth #3. You can't “sniff” optical fiber without cutting it first and causing disruption.

• SAN Security Myth #4. The SAN is not connected to the Internet so there is no risk from outside attackers.

• SAN Security Myth #5. Even if fiber cables could be sniffed, there are so many protocol layers, file systems, and database formats that the data would not be legible in any case.

• SAN Security Myth #6. Even if fiber cables could be sniffed, the amount of data is simply too large to capture realistically and would require expensive equipment to do so.

• SAN Security Myth #7. If the switches already come with built-in security features, why should I be concerned with implementing security features in the SAN?


Because data on disk or tape is vulnerable to theft or loss, sensitive information is at risk unless the data itself is encrypted. Best practices for guarding corporate and customer information consequently mandate full encryption of data as it is written to disk or tape and a secure means to manage the encryption keys used to encrypt and decrypt the data. Brocade has developed a fabric-based solution for encrypting data-at-rest that is available as a blade for the Brocade DCX Backbone (Brocade FS8-18 Encryption Blade) or as a standalone switch (Brocade Encryption Switch).

Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape.

Both the 16-port encryption blade for the Brocade DCX and the 32-port encryption switch provide 8 Gbps per port for fabric or device connectivity and an aggregate 96 Gbps of hardware-based encryption throughput and 48 Gbps of data compression bandwidth. The combination of encryption and data compression enables greater efficiency in both storing and securing data. For encryption to disk, the IEEE AES256-XTS encryption algorithm facilitates encryption of disk blocks without increasing the amount of data per block. For encryption to tape, the AES256-GCM encryption algorithm appends authenticating metadata to each encrypted data block. Because tape devices accommodate variable block sizes, encryption does not impede backup operations. From the host standpoint, both encryption processes are transparent, and due to the high performance of the Brocade encryption engine there is no impact on response time.
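The practical difference between the two algorithms is easy to see in a few lines of Python using the third-party cryptography package (pip install cryptography). This only illustrates the XTS and GCM modes themselves, not the Brocade implementation, and the key handling here is deliberately simplistic.

# Length behavior that matters for disk vs. tape encryption; illustrative only.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

block = os.urandom(512)                       # one 512-byte disk block

# AES-256-XTS: ciphertext is the same size as the plaintext block,
# so disk block boundaries are preserved.
xts_key = os.urandom(64)                      # XTS uses a double-length (2 x 256-bit) key
tweak = os.urandom(16)
xts = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = xts.update(block) + xts.finalize()
print(len(block), len(xts_ct))                # 512 512

# AES-256-GCM: ciphertext carries a 16-byte authentication tag,
# acceptable for tape, which handles variable block sizes.
gcm_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
gcm_ct = AESGCM(gcm_key).encrypt(nonce, block, None)
print(len(gcm_ct))                            # 528 (512 + 16-byte tag)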


As shown in Figure 31, the Brocade Encryption Switch supports both fabric attachment and end-device connectivity. Within both the encryption blade and switch, virtual targets are presented to the hosts and virtual initiators are presented to the downstream storage array or tape subsystem ports. Frame redirection, a Fabric OS technology, is used to forward traffic to the encryption device for encryption on data writes and decryption on data reads. In the case of direct device attachment (for example, the tape device connected to the encryption switch in Figure 31), the encrypted data is simply switched to the appropriate port.

Because no additional middleware is required for hosts or storage devices, this solution easily integrates into existing fabrics and can provide a much higher level of data security with minimal reconfiguration. Key management for safeguarding and authenticating encryption keys is provided via Ethernet connection to the Brocade encryption device.

In addition to data encryption for disk and tape, fabric-based security includes features for protecting the integrity of fabric connectivity and safeguarding management interfaces. Brocade switches and directors use access control lists (ACLs) to allow access to the fabric for only authorized switches and end devices. Based on the port or device's WWN, Switch Connection Control (SCC) and Device Connection Control (DCC) prevent the intentional or accidental connection of a new switch or device that would potentially pose a security threat, as shown in Figure 32. Once configured, the fabric is essentially locked down to prevent unauthorized access until the administrator specifically defines a new connection. Although this requires additional management intervention, it precludes disruptive fabric reconfigurations and security breaches that could otherwise occur through deliberate action or operator error.
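The effect of SCC and DCC policies can be sketched as a simple WWN allow-list check. The policy structure and WWN values below are invented for illustration and do not reflect Fabric OS configuration syntax.

# Conceptual model of WWN-based fabric ACLs (SCC/DCC); illustrative only.
scc_policy = {"10:00:00:05:1e:aa:bb:01", "10:00:00:05:1e:aa:bb:02"}   # authorized switch WWNs
dcc_policy = {
    "port_4": {"21:00:00:e0:8b:01:02:03"},   # only this host WWN may log in on port 4
    "port_5": {"50:06:0e:80:00:aa:bb:01"},   # only this storage WWN may log in on port 5
}

def admit_switch(wwn):
    return wwn in scc_policy

def admit_device(port, wwn):
    return wwn in dcc_policy.get(port, set())

print(admit_switch("10:00:00:05:1e:aa:bb:99"))            # False: unknown switch rejected
print(admit_device("port_4", "21:00:00:e0:8b:01:02:03"))  # True: authorized host admitted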

Brocade Key Management Solutions

• NetApp KM500 Lifetime Key Management (LKM) Appliance

• EMC RSA Key Manager (RKM) Server Appliance

• HP StorageWorks Secure Key Manager (SKM)

• Thales Encryption Manager for Storage

For more about these key management solutions, visit the Brocade Encryption Switch product page on www.brocade.com and find the Technical Briefs section at the bottom of the page.



Figure 32. Using fabric ACLs to secure switch and device connectivity.

For securing storage data-in-flight, Brocade also provides hardware-based encryption on its 8 Gbps HBAs and the Brocade 7800 Extension Switch and FX8-24 Extension Blade products. In high-security environments, meeting regulatory compliance standards can require encrypting all data along the entire data path from host to the primary storage target as well as secondary storage in disaster recovery scenarios. This capability is now available across the entire fabric with no impact to fabric performance or availability.

Multi-protocol Data Center Fabrics

Data center best practices have historically prescribed the separation of networks according to function. Creating a dedicated storage area network, for example, ensures that storage traffic is unimpeded by the more erratic traffic patterns typical of messaging or data communications networks. In part, this separation was facilitated by the fact that nearly all storage networks used a unique protocol and transport, that is, Fibre Channel, while LANs are almost universally based on Ethernet. This situation changed somewhat with the introduction of iSCSI for transporting SCSI block data over conventional Ethernet and TCP/IP, although most iSCSI vendors still recommend building a dedicated IP storage network for iSCSI hosts and storage.


Fibre Channel continues to be the protocol of choice for high-performance, highly available SANs. There are several reasons for this, including the ready availability of diverse Fibre Channel products and the continued evolution of the technology to higher speeds and richer functionality over time. Still, although nearly all data centers worldwide run their most mission-critical applications on Fibre Channel SANs, many data centers also house hundreds or thousands of moderate-performance standalone servers with legacy DAS. It is difficult to cost justify installation of Fibre Channel HBAs into low-cost servers if the cost of storage connectivity exceeds the cost of the server itself.

iSCSI has found its niche market primarily in cost-sensitive small and medium business (SMB) environments. It offers the advantage of low-cost per-server connectivity, since iSCSI device drivers are readily available for a variety of operating systems at no cost and can be run over conventional Ethernet or (preferably) Gigabit Ethernet interfaces. The IP SAN switched infrastructure can be built with off-the-shelf, low-cost Ethernet switches. And various storage system vendors offer iSCSI interfaces for mid-range storage systems and tape backup subsystems. Of course, Gigabit Ethernet does not have the performance of 4 or 8 Gbps Fibre Channel, but for mid-tier applications Gigabit Ethernet may be sufficient and the total cost for implementing shared storage is very reasonable, even when compared to direct-attached SCSI storage.

Using iSCSI to transition from direct-attached to shared storage yields most of the benefits associated with traditional SANs. Using iSCSI connectivity, servers are no longer the exclusive "owner" of their own (direct-attached) storage, but can share storage systems over the storage network. If a particular server fails, alternate servers can bind to the failed server's LUNs and continue operation. As with conventional Fibre Channel SANs, adding storage capacity to the network is no longer disruptive and can be performed on the fly. In terms of management overhead, the greatest benefit of converting from direct-attached to shared storage is the ability to centralize backup operations. Instead of backing up individual standalone servers, backup can now be performed across the IP SAN without disrupting client access. In addition, features such as iSCSI SAN boot can simplify server administration by centralizing management of boot images instead of touching hundreds of individual servers.


One significant drawback of iSCSI, however, is that by using commodity Ethernet switches for the IP SAN infrastructure, none of the storage-specific features built into Fibre Channel fabric switches are available. Fabric login services, automatic address assignment, simple name server (SNS) registration, device discovery, zoning, and other storage services are simply unavailable in conventional Ethernet switches. Consequently, although small iSCSI deployments can be configured manually to ensure proper assignment of servers to their storage LUNs, iSCSI is difficult to manage when scaled to larger deployments. In addition, because Ethernet switches are indifferent to the upper-layer IP protocols they carry, it is more difficult to diagnose storage-related problems that might arise. iSCSI standards do include the Internet Storage Name Service (iSNS) protocol for device authentication and discovery, but iSNS must be supplied as a third-party add-on to the IP SAN.

Collectively, these factors overshadow the performance difference between Gigabit Ethernet and 8 Gbps Fibre Channel. Performance becomes less of an issue when iSCSI is run over 10 Gigabit Ethernet, but that typically requires a specialized iSCSI network interface card (NIC) with TCP offload, iSCSI Extensions for RDMA (iSER), 10 GbE switches, and 10 GbE storage ports. The cost advantage of iSCSI at 1 GbE is therefore quickly undermined when iSCSI attempts to achieve the performance levels common to Fibre Channel. Even with these additional costs, the basic fabric and storage services embedded in Fibre Channel switches are still unavailable.

For data center applications, however, low-cost iSCSI running over standard Gigabit Ethernet does make sense when standalone DAS servers are integrated into existing Fibre Channel SAN infrastructures via gateway products with iSCSI-to-Fibre Channel protocol conversion. The Brocade FC4-16IP iSCSI Blade for the Brocade 48000 Director, for example, can aggregate hundreds of iSCSI-based servers for connectivity into an existing SAN, as shown in Figure 33. This enables formerly standalone low-cost servers to enjoy the benefits of shared storage while advanced storage services are supplied by the fabric itself. Simplifying tape backup operations in itself is often sufficient cost justification for iSCSI integration via gateways, and if free iSCSI device drivers are used, the per-server connectivity cost is negligible.


Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.

As discussed in Chapter 3, FCoE is another multi-protocol option for integrating new servers into existing data center fabrics. Unlike the iSCSI protocol, which uses Layer 3 IP routing and TCP for packet recovery, FCoE operates at Layer 2 switching and relies on Fibre Channel protocols for recovery. FCoE is therefore much closer to native Fibre Channel in terms of protocol overhead and performance but does require an additional level of frame encapsulation and decapsulation for transport over Ethernet. Another dissimilarity to iSCSI is that FCoE requires a specialized host adapter card, a CNA that supports FCoE and 10 Gbps Data Center Bridging. In fact, to replicate the flow control and deterministic performance of native Fibre Channel, Ethernet switches between the host and target must be DCB capable. FCoE therefore does not have the obvious cost advantage of iSCSI but does offer a comparable means to simplify cabling by reducing the number of server connections needed to carry both messaging and storage traffic.

Although FCoE is being aggressively promoted by some network vendors, the cost/benefit advantage has yet to be demonstrated in practice. In current economic conditions, many customers are hesitant to adopt new technologies that have no proven track record or viable ROI. Although Brocade has developed both CNA adapters and FCoE switch products for customers who are ready to deploy them, the market will determine if simplifying server connectivity is sufficient cost


justification for FCoE adoption. At the point when 10 Gbps DCB-enabled switches and CNA technology become commoditized, FCoE will certainly become an attractive option.

Other enhanced solutions for data center fabrics include Fibre Channel over IP (FCIP) for SAN extension, Virtual Fabrics (VF), and Integrated Routing (IR). As discussed in the next section on disaster recovery, FCIP is used to extend Fibre Channel over conventional IP networks for remote data replication or remote tape backup. Virtual Fabrics protocols enable a single complex fabric to be subdivided into separate virtual SANs in order to segregate different applications and protect against fabric-wide disruptions. IR SAN routing protocols enable connectivity between two or more independent SANs for resource sharing without creating one large flat network.

Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.


As shown in Figure 34, Virtual Fabrics is used to divide a single physical SAN into multiple logical SANs. Each virtual fabric behaves as a separate fabric entity, or logical fabric, with its own simple name server (SNS), registered state change notification (RSCN) handling, and Brocade domain. Logical fabrics can span multiple switches, providing greater flexibility in how servers and storage within a logical fabric can be deployed. To isolate frame routing between the logical fabrics, VF tagging headers are applied to the appropriate frames as they are issued. The headers are then removed by the destination switch before the frames are sent on to the appropriate initiator or target. Theoretically, the VF tagging header would allow for 4096 logical fabrics on a single physical SAN configuration, although in practice only a few are typically used.
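The tagging mechanism can be modeled in a few lines of Python. The frame and fabric identifiers below are invented for the example; the sketch only conveys that frames are tagged with a fabric ID on ingress, routed only within that logical fabric, and untagged before delivery (the 4096 figure above corresponds to a 12-bit identifier).

# Minimal sketch of VF tagging and per-logical-fabric isolation; illustrative only.
def tag_frame(frame, vf_id):
    assert 0 <= vf_id < 4096
    return {"vf_id": vf_id, "payload": frame}

def route(tagged_frame, egress_vf_id):
    # Deliver only within the same logical fabric; strip the tag at the edge.
    if tagged_frame["vf_id"] != egress_vf_id:
        return None                      # isolated: never crosses logical fabrics
    return tagged_frame["payload"]

f = tag_frame({"src": "server_1", "dst": "array_7"}, vf_id=2)
print(route(f, egress_vf_id=2))          # delivered within Logical Fabric 2
print(route(f, egress_vf_id=3))          # None: blocked from Logical Fabric 3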

Virtual Fabrics is a means to consolidate SAN assets while enforcing managed units of SAN. In the example shown in Figure 34, each of the three Logical Fabrics could be administered by a separate department with different storage, security, and bill-back policies. Although the total SAN configuration may be quite large, the division into separately managed logical fabrics simplifies administration while leveraging the data center's investment in SAN technology. Brocade Fabric OS supports Virtual Fabrics across Brocade switch, director, and backbone platforms.

Where Virtual Fabrics technology can be used to isolate resources on the same physical fabric, Integrated Routing (IR) is used to share resources between separate physical fabrics. Without IR, connecting two or more fabrics together would create a large flat network, analogous to bridging in LAN environments. Creating very large fabrics, however, can lead to much greater complexity in management and vulnerability to fabric-wide disruptions.


Figure 35. IR facilitates resource sharing between physically independent SANs.

As shown in Figure 35, IR SAN routers provide both connectivity and fault isolation between separate SANs. In this example, a server on SAN A can access a storage array on SAN B (dashed line) via the SAN router. From the perspective of the server, the storage array is a local resource on SAN A. The SAN router performs network address translation to proxy the appearance of the storage array and to conform to the address space of each SAN. Because each SAN is autonomous, fabric reconfigurations or RSCN broadcasts on one SAN will not adversely impact the others. Brocade products such as the Brocade 7800 Extension Switch and FX8-24 Extension Blade for the Brocade DCX Backbone provide routing capability for non-disruptive resource sharing between independent SANs.

Fabric-based Disaster Recovery

Deploying new technologies to achieve greater energy efficiency, hardware consolidation, and more intelligence in the data center fabric cannot ensure data availability if the data center itself is vulnerable to disruption or outage. Although data center facilities may be designed to withstand seismic or catastrophic weather events, a major disruption can result in prolonged outages that put business or the viability of the enterprise at risk. Consequently, most data centers have some degree of disaster recovery planning that provides either


instantaneous failover to an alternate site or recovery within acceptable time frames for business resumption. Fortunately, disaster recovery technology has improved significantly in recent years and now enables companies to implement more economical disaster recovery solutions that do not burden the data center with excessive costs or administration.

Disaster recovery planning today is bounded by tighter budget constraints and conventional recovery point and recovery time (RPO/RTO) objectives. In addition, more recent examples of region-wide disruptions (for example, Northeast power blackouts and hurricanes Katrina and Rita in the US) have raised concerns over how far away a recovery site must be to ensure reliable failover. The distance between primary and failover sites is also affected by the type of data protection required. Synchronous disk-to-disk data replication, for example, is limited to metropolitan distances, typically 100 miles or less. Synchronous data replication ensures that every transaction is safely duplicated to a remote location, but the distance may not be sufficient to protect against regional events. Asynchronous data replication buffers multiple transactions before transmission, and so may miss the most recent transaction if a failure occurs. It does, however, tolerate extremely long-distance replication and is currently deployed for disaster recovery installations that span transoceanic and transcontinental distances.
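The trade-off between the two replication modes can be summarized in a short Python sketch. The latency figures and function names are placeholders; the point is only that synchronous writes wait for the remote copy (zero data exposure, distance-limited), while asynchronous writes are acknowledged immediately and buffered (distance-tolerant, with some exposure).

# Simplified contrast between synchronous and asynchronous replication; illustrative only.
def remote_commit(block):
    pass  # stand-in for the wire transfer and remote array acknowledgement

def sync_write(block, remote_rtt_ms):
    # Host acknowledgement waits for the remote copy: RPO = 0,
    # but each write pays the full round trip, which caps practical distance.
    remote_commit(block)
    return {"ack_after_ms": remote_rtt_ms, "exposed_writes": 0}

pending = []
def async_write(block, remote_rtt_ms):
    # Host is acknowledged immediately; buffered writes drain in the background,
    # so a failure can lose whatever is still queued.
    pending.append(block)
    return {"ack_after_ms": 0, "exposed_writes": len(pending)}

print(sync_write("blk1", remote_rtt_ms=2))     # metro distance: ~2 ms round trip
print(async_write("blk2", remote_rtt_ms=80))   # transcontinental: acknowledgement is immediate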

Both synchronous and asynchronous data replication over distance require some kind of wide area service such as metro dark fiber, dense wavelength division multiplexing (DWDM), Synchronous Optical Networking (SONET), or an IP network, and the recurring monthly cost of WAN links is typically the most expensive operational cost in a disaster recovery implementation. To connect primary and secondary data center SANs efficiently, then, requires technology to optimize use of wide area links in order to transmit more data in less time and the flexibility to deploy long-distance replication over the most cost-effective WAN links appropriate for the application.


Achieving maximum utilization of metro or wide area links is facilitated by combining several technologies, including high-speed bandwidth, port buffers, data compression, rate limiting, and specialized algorithms such as SCSI write acceleration and tape pipelining. For metropolitan distances suitable for synchronous disk-to-disk data replication, for example, native Fibre Channel extension can be implemented up to 218 miles at 8 Gbps using Brocade 8 Gbps port cards in the Brocade 48000 or Brocade DCX. While the distance supported is more than adequate for synchronous applications, the 8 Gbps bandwidth ensures maximum utilization of dark fiber or MAN services. In order to avoid credit starvation at high speeds, Brocade switch architecture allocates additional port buffers for continuous performance. Even longer distances for native Fibre Channel transport are possible at lower port speeds.

Commonly available IP network links are typically used for long-distance asynchronous data replication. Fibre Channel over IP enables Fibre Channel-originated traffic to pass over conventional IP infrastructures via frame encapsulation of Fibre Channel within TCP/IP. FCIP is now used for disaster recovery solutions that span thousands of miles and, because it uses standard IP services, is more economical than other WAN transports. Brocade has developed auxiliary technologies to achieve even higher performance over IP networks. Data compression, for example, can provide a 5x or greater increase in link capacity and so enable slower WAN links to carry more useful traffic. A 45 Megabits per second (Mbps) T3 WAN link typically provides about 4.5 Megabytes per second (MBps) of data throughput. By using data compression, the throughput can be increased to 25 MBps. This is equivalent to using far more expensive 155 Mbps OC-3 WAN links to achieve the same data throughput.
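The T3 arithmetic above works out roughly as follows; the protocol-overhead factor is an assumption used only to reproduce the approximate figures in the text.

# Reproducing the T3 compression arithmetic (approximate figures).
t3_mbps = 45                               # T3 line rate in megabits per second
effective_mbytes = t3_mbps / 8 * 0.8       # ~4.5 MBps after protocol overhead (assumed 20%)
compression_ratio = 5                      # "5x or greater" compression
print(f"uncompressed: ~{effective_mbytes:.1f} MBps")
print(f"with {compression_ratio}x compression: ~{effective_mbytes * compression_ratio:.0f} MBps")
# Roughly 22-25 MBps of effective throughput, comparable to an uncompressed OC-3 link.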

Likewise, significant performance improvements over conventional IP networks can be achieved with Brocade FastWrite acceleration and tape pipelining algorithms. These features dramatically reduce the protocol overhead that would otherwise occupy WAN bandwidth and enable much faster data transfers on a given link speed. Brocade FICON acceleration provides comparable functionality for mainframe environments. Collectively, these features achieve the objectives of maximizing utilization of expensive WAN services, while ensuring data integrity for disaster recovery and remote replication applications.


Figure 36. Long-distance connectivity options using Brocade devices.

As shown in Figure 36, Brocade DCX and SAN extension products offer a variety of ways to implement long-distance SAN connectivity for disaster recovery and other remote implementations. For synchronous disk-to-disk data replication within a metropolitan circumference, native Fibre Channel at 8 Gbps or 10 Gbps can be driven directly from Brocade DCX ports over dark fiber or DWDM. For asynchronous replication over hundreds or thousands of miles, the Brocade 7800 and FX8-24 extension platforms convert native Fibre Channel to FCIP for transport over conventional IP network infrastructures. These solutions provide flexible options for storage architects to deploy the most appropriate form of data protection based on specific application needs. Many large data centers use a combination of extension technologies to provide both synchronous replication within metro boundaries to capture every transaction and asynchronous FCIP-based extension to more distant recovery sites as a safeguard against regional disruptions.


Chapter 6: The New Data Center LAN

Building a cost-effective, energy-efficient, high-performance, and intelligent network

Just as data center fabrics bind application servers to storage, the data center Ethernet network brings server resources and processing power to clients. Although the fundamental principles of data center network design have not changed significantly, the network is under increasing pressure to serve more complex and varied client needs. According to the International Data Corporation (IDC), for example, the growth of non-PC client data access is five times greater than that of conventional PC-based users, as shown by the rapid proliferation of PDAs, smart phones, and other mobile and wireless devices. This change applies to traditional in-house clients as well as external customers and puts additional pressure on both corporate intranet and Internet network access.

Bandwidth is also becoming an issue. The convergence of voice, video, graphics, and data over a common infrastructure is a driving force behind the shift from 1 GbE to 10 GbE in most data centers. Rich content is not simply a roadside attraction for modern business but a necessary competitive advantage for attracting and retaining customers. Use of multi-core processors in server platforms increases the processing power and reduces the number of requisite connections per platform, but also requires more raw bandwidth per connection. Server virtualization is having the same effect. If 20 virtual machines are now sharing the same physical network port previously occupied by one physical machine, the port speed must necessarily be increased to accommodate the potential 20x increase in client requests.

Server virtualization's dense compute environment is also driving port density in the network interconnect, especially when virtualization is installed on blade servers. Physical consolidation of network


connectivity is important both for rationalizing the cable plant and for providing flexibility to accommodate mobility of VMs as applications are migrated from one platform to another. Where previously server network access was adequately served by 1 Gbps ports, top-of-rack access layer switches now must provide compact connectivity at 10 Gbps. This, in turn, requires more high-speed ports at the aggregation and core layers to accommodate higher traffic volumes.

Other trends such as software as a service (SaaS) and Web-based business applications are shifting the burden of data processing from remote or branch clients back to the data center. To maintain acceptable response times and ensure equitable service to multiple concurrent clients, preprocessing of data flows helps offload server CPU cycles and provides higher availability. Application layer (Layer 4–7) networking is therefore gaining traction as a means to balance workloads and offload networking protocol processing. By accelerating application access, more transactions can be handled in less time and with less congestion at the server front-end. Web-based applications in particular benefit from a network-based hardware assist to ensure reliability and availability to internal and external users.

Even with server consolidation, blade frames, and virtualization, servers collectively still account for the majority of data center power and cooling requirements. Network infrastructure, however, still incurs a significant power and cooling overhead, and data center managers are now evaluating power consumption as one of the key criteria in network equipment selection. In addition, data center floor space is at a premium, and more compact, higher-port-density network switches can save valuable real estate.

Another cost-cutting trend for large enterprises is the consolidation of multiple data centers to one or just a few larger regional data centers. Such large-scale consolidation typically involves construction of new facilities that can leverage state-of-the-art energy efficiencies such as solar power, air economizers, fly-wheel technology, and hot/cold aisle floor plans (see Figure 3). The selection of new IT equipment is also an essential factor in maximizing the benefit of consolidation, maintaining availability, and reducing ongoing operational expense. Since the new data center network infrastructure must now support client traffic that was previously distributed over multiple data centers, deploying a high-performance LAN with advanced application support is crucial for a successful consolidation strategy. In addition, the reduction of available data centers increases the need for security throughout the network infrastructure to ensure data integrity and application availability.


A Layered Architecture

With tens of thousands of installations worldwide, data center networks have evolved into a common infrastructure built on multiple layers of connectivity. The three fundamental layers common to nearly all data center networks are the access, aggregation, and core layers. This basic architecture has proven to be the most suitable for providing flexibility, high performance, and resiliency and can be scaled from moderate to very large infrastructures.

Figure 37. Access, aggregation, and core layers in the data center network.

As shown in Figure 37, the conventional three-layer network architecture provides a hierarchy of connectivity that enables servers to communicate with each other (for cluster and HPC environments) and with external clients. Typically, higher bandwidth is provided at the aggregation and core layers to accommodate the high volume of access layer inputs, although high-performance applications may also


require 10 Gbps links. Scalability is achieved by adding more switches at the requisite layers as the population of physical or virtual servers and volume of traffic increases over time.

The access layer provides the direct network connection to application and file servers. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches. Access layer switches typically provide basic Layer 2 (MAC-based) and Layer 3 (IP-based) switching for server connectivity and often have higher-speed 10 GbE uplink ports to consolidate connectivity to the aggregation layer.

Because servers represent the highest population of platforms in the data center, the access layer functions as the fan-in point to join many dedicated network connections to fewer but higher-speed shared connections. Unless designed otherwise, the access layer is therefore typically oversubscribed in a 6:1 or higher ratio of server network ports to uplink ports. In Figure 37, for example, the mission-critical servers could be provisioned with 10 GbE network interfaces and a 1:1 ratio for uplink. The general-purpose servers, by contrast, would be adequately supported with 1 GbE network ports and a 6:1 or higher oversubscription ratio.
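The oversubscription ratio is simply the aggregate server-facing bandwidth divided by the uplink bandwidth. The port counts below are example figures chosen to match the 6:1 ratio mentioned above, not a recommendation for any particular switch.

# Oversubscription math for an access switch (example figures only).
server_ports, server_port_gbps = 48, 1     # 48 x 1 GbE server-facing ports
uplink_gbps = 8                            # e.g. an 8 x 1 GbE uplink trunk to aggregation

downstream = server_ports * server_port_gbps   # 48 Gbps of possible server demand
print(f"oversubscription ratio: {downstream / uplink_gbps:.0f}:1")   # 6:1
# A mission-critical rack of 10 GbE servers sized at 1:1 would instead need
# uplink bandwidth equal to the sum of its server ports.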

Access layer switches are available in a variety of port densities and can be deployed for optimal cabling and maintenance. Options for switch placement range from top of rack to middle of rack, middle of row, and end of row. As illustrated in Figure 38, top-of-rack access layer switches are typically deployed in redundant pairs with cabling run to each racked server. This is a common configuration for medium and small server farms and enables each rack to be managed as a single entity. A middle-of-rack configuration is similar but with multiple 1U switches deployed throughout the stack to further simplify cabling.

For high-availability environments, however, larger switches with redundant power supplies and switch modules can be positioned in middle-of-row or end-of-row configurations. In these deployments, middle-of-row placement facilitates shorter cable runs, while end-of-row placement requires longer cable runs to the most distant racks. In either case, high-availability network access is enabled by the hardened architecture of HA access switches.


Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy.

Examples of top-of-rack access solutions include Brocade FastIron Edge Series switches. Because different applications can have different performance and availability requirements, these access switches offer multiple connectivity options (10, 100, or 1000 Mbps and 10 Gbps) and redundant features. Within the data center, the access layer typically supports application servers but can also be used to support in-house client workstations. In conventional use, the data center access layer supports servers, while clients and workstations are connected at the network edge.

In addition to scalable server connectivity, upstream links to the aggregation layer can be optimized for high availability in metropolitan area networks (MANs) through value-added features such as the Metro Ring Protocol (MRP) and Virtual Switch Redundancy Protocol (VSRP). As discussed in more detail later in this chapter, these features replace conventional Spanning Tree Protocol (STP) for metro and campus environments with a much faster, sub-second recovery time for failed links.

For modern data centers, access layer services can also include power over Ethernet (PoE) to support voice over IP (VoIP) telecommunications systems and wireless access points for in-house clients as well as security monitoring. The ability to provide both data and power over Ethernet greatly simplifies the wiring infrastructure and facilitates resource management.


At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches, which provide advanced routing functions and upstream connectivity to the core layer. Examples of aggregation-layer switches include the Brocade BigIron RX Series (with up to 5.12 Tbps switching capacity) with Layer 2 and Layer 3 switching and the Brocade ServerIron ADX Series with Layer 4–7 application switching. Because the aggregation layer must support the traffic flows of potentially thousands of downstream servers, performance and availability are absolutely critical.

As the name implies, the network core is the nucleus of the data center LAN and provides the top-layer switching between all devices connected via the aggregation and access layers. In a classic three-tier model, the core also provides connectivity to the external corporate network, intranet, and Internet. In addition to high-performance 10 Gbps Ethernet ports, core switches can be provisioned with OC-12 or higher WAN interfaces. Examples of network core switches include the Brocade NetIron MLX Series switches with up to 7.68 Tbps switching capacity. These enterprise-class switches provide high availability and fault tolerance to ensure reliable data access.

Consolidating Network Tiers
The access/aggregation/core architecture is not a rigid blueprint for data center networking. Although it is possible to attach servers directly to the core or aggregation layer, there are some advantages to maintaining distinct connectivity tiers. Layer 2 domains, for example, can be managed with a separate access layer linked through aggregation points. In addition, advanced service options available for aggregation-class switches can be shared by more downstream devices connected to standard access switches. A three-tier architecture also provides flexibility in selectively deploying bandwidth and services that align with specific application requirements.

With products such as the Brocade BigIron RX Series switches, however, it is possible to collapse the functionality of a conventional multi-tier architecture into a smaller footprint. By providing support for 768 x 1 Gbps downstream ports and 64 x 10 Gbps upstream ports, consolidation of port connectivity can be achieved with an accompanying reduction in power draw and cooling overhead compared to a standard multi-switch design, as shown in Figure 39.

Page 94: 87652141 the-new-data-center-brocade

Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy-efficient footprint.

In this example, Layer 2 domains are segregated via VLANs, and advanced aggregation-level services can be integrated directly into the BigIron chassis. In addition, different-speed port cards can be provisioned to accommodate both moderate- and high-performance requirements, with up to 512 x 10 Gbps ports per chassis. For modern data center networks, the advantage of centralizing connectivity and management is complemented by reduced power consumption and consolidation of rack space.

Design Considerations
Although each network tier has unique functional requirements, the entire data center LAN must provide high availability, high performance, security for data flows, and visibility for management. Proper product selection and interoperability between tiers is therefore essential for building a resilient data center network infrastructure that enables maximum utilization of resources while minimizing operational expense. A properly designed network infrastructure, in turn, is a foundation layer for building higher-level network services to automate data transport processes such as network resource allocation and proactive network management.

Consolidate to Accommodate Growth
One of the advantages of a tiered data center LAN infrastructure is that it can be expanded to accommodate growth of servers and clients by adding more switches at the appropriate layers. Unfortunately, this frequently results in the spontaneous acquisition of more and more equipment over time as network managers react to increasing demand. At some point the sheer number of network devices makes the network difficult to manage and troubleshoot, increases the complexity of the cable plant, and invariably introduces congestion points that degrade network performance.

[Figure 39 callouts: 64 x 10 Gbps ports, 768 x 1 Gbps ports]

Page 95: 87652141 the-new-data-center-brocade

Network consolidation via larger, higher-port-density switches can help resolve space and cooling issues in the data center, and it can also facilitate planning for growth. Brocade BigIron RX Series switches, for example, are designed to scale from moderate to high port-count requirements in a single chassis for both access and aggregation layer deployment (greater than 1500 x 1 Gbps or 512 x 10 Gbps ports). Increased port density alone, however, is not sufficient to accommodate growth if increasing the port count results in degraded performance on each port. Consequently, BigIron RX Series switches are engineered to support over 5 Tbps aggregate bandwidth to ensure that even fully loaded configurations deliver wire-speed throughput.
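As a rough, back-of-the-envelope check of the wire-speed claim, the capacity needed for non-blocking forwarding can be estimated by summing port speeds; the configurations below are hypothetical, and vendors may quote full-duplex figures that double the numbers.

def required_capacity_gbps(port_groups):
    # port_groups: list of (port_count, port_speed_gbps) tuples
    return sum(count * speed for count, speed in port_groups)

print(required_capacity_gbps([(512, 10)]))           # 5120 Gbps, roughly 5.1 Tbps
print(required_capacity_gbps([(768, 1), (64, 10)]))  # 1408 Gbps for a mixed 1/10 GbE chassis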

From a management standpoint, network consolidation significantly reduces the number of elements to configure and monitor and streamlines microcode upgrades. A large multi-slot chassis that replaces 10 discrete switches, for example, simplifies the network management map and makes it much easier to identify traffic flows through the network infrastructure.

Network Resiliency
Early proprietary data communications networks based on SNA and 3270 protocols were predicated on high availability for remote user access to centralized mainframe applications. IP networking, by contrast, was originally a best-effort delivery mechanism designed to function in potentially congested or lossy infrastructures (for example, after disruption due to a nuclear exchange). Now that IP networking is the mainstream mechanism for virtually all business transactions worldwide, high availability is absolutely essential for day-to-day operations and best-effort delivery is no longer acceptable.

Network resiliency has two major components: the high-availability architecture of individual switches and the high-availability design of a multi-switch network. For the former, redundant power supplies, fan modules, and switching blades ensure that an individual unit can withstand component failures. For the latter, redundant pathing through the network using failover links and routing protocols ensures that the loss of an individual switch or link will not result in loss of data access.

Resilient routing protocols such as Virtual Router Redundancy Protocol (VRRP) as defined in RFC 3768 provide a standards-based mechanism to ensure high-availability access to a network subnet even if a primary router or path fails. Multiple routers can be configured as a single virtual router. If a master router fails, a backup router automatically assumes the routing task for continued service, typically within 3 seconds of failure detection. VRRP Extension (VRRPE) is an extension of

Page 96: 87652141 the-new-data-center-brocade

VRRP that uses Bidirectional Forwarding Detection (BFD) to shrink the failover window to about 1 second. Because networks now carry more latency-sensitive protocols such as Voice over IP, failover must be performed as quickly as possible to ensure uninterrupted access.
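The master/backup behavior can be illustrated with a small sketch; it models only priority-based election among live routers, not the full protocol (no advertisements, timers, preemption, or BFD), and the router names and priorities are hypothetical.

from dataclasses import dataclass

@dataclass
class VrrpRouter:
    name: str
    priority: int       # 1-254; highest priority among live routers becomes master
    alive: bool = True

def elect_master(routers):
    live = [r for r in routers if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

peers = [VrrpRouter("rtr-a", 200), VrrpRouter("rtr-b", 100)]
print(elect_master(peers).name)   # rtr-a owns the virtual router address
peers[0].alive = False            # simulate failure of the primary router
print(elect_master(peers).name)   # rtr-b takes over, so the subnet stays reachable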

Timing can also be critical for Layer 2 network segments. At Layer 2, resiliency is enabled by the Rapid Spanning Tree Protocol (RSTP). Spanning tree allows redundant pathing through the network while disabling redundant links to prevent forwarding loops. If a primary link fails, conventional STP can identify the failure and enable a standby link within 30 to 50 seconds. RSTP decreases the failover window to about 1 second. Innovative protocols such as Brocade Virtual Switch Redundancy Protocol (VSRP) and Metro Ring Protocol (MRP), however, can accelerate the failover process to a sub-second response time. In addition to enhanced resiliency, VSRP enables more efficient use of network resources by allowing a link that is in standby or blocked mode for one VLAN to be active for another VLAN.
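The per-VLAN behavior can be pictured with a toy model: a link that is blocked for one VLAN can forward for another, and on failure the blocked peer is promoted. Link and VLAN names are hypothetical, and this is only an illustration of the idea, not the VSRP state machine.

link_roles = {
    ("uplink-1", "vlan-10"): "forwarding",
    ("uplink-2", "vlan-10"): "blocked",      # standby for VLAN 10, active for VLAN 20
    ("uplink-1", "vlan-20"): "blocked",
    ("uplink-2", "vlan-20"): "forwarding",
}

def fail_link(roles, failed_link):
    # Mark the failed link down, then promote the standby link for each affected VLAN
    affected = [vlan for (link, vlan), role in roles.items()
                if link == failed_link and role == "forwarding"]
    for (link, vlan) in list(roles):
        if link == failed_link:
            roles[(link, vlan)] = "down"
    for vlan in affected:
        for (link, v) in roles:
            if v == vlan and roles[(link, v)] == "blocked":
                roles[(link, v)] = "forwarding"

fail_link(link_roles, "uplink-1")
print(link_roles[("uplink-2", "vlan-10")])   # forwarding: the standby path takes over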

Network Security
Data center network administrators must now assume that their networks are under constant threat of attack from both internal and external sources. Attack mechanisms such as denial of service (DoS) are today well understood and typically blocked by a combination of access control lists (ACLs) and rate-limiting algorithms that prevent packet flooding. Brocade, for example, provides enhanced hardware-based, wire-speed ACL processing to block DoS and the more sinister distributed DoS (DDoS) attacks. Unfortunately, hackers are constantly creating new means to penetrate or disable corporate and government networks, and network security requires more than the deployment of conventional firewalls.

Continuous traffic analysis to monitor the behavior of hosts is one means to guard against intrusion. The sFlow (RFC 3176) standard defines a process for sampling network traffic at wire speed without impacting network performance. Packet sampling is performed in hardware by switches and routers in the network, and samples are forwarded to a central sFlow server or collector for analysis. Abnormal traffic patterns or host behavior can then be identified and proactively responded to in real time. Brocade IronView Network Manager (INM), for example, incorporates sFlow for continuous monitoring of the network in addition to ACL and rate-limiting management of network elements.
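The sampling idea can be sketched in a few lines: sample roughly 1 in N packets, then scale the sampled byte counts back up at the collector to estimate per-source volume. The sampling rate and packet format here are hypothetical; real sFlow agents also export interface counters and packet headers.

import random
from collections import Counter

SAMPLE_RATE = 1024   # roughly 1-in-1024 packets, chosen arbitrarily for the example

def sample(packets, rate=SAMPLE_RATE):
    # packets: iterable of (src_ip, byte_count) tuples; hardware does this step in practice
    return [p for p in packets if random.randrange(rate) == 0]

def estimated_top_talkers(samples, rate=SAMPLE_RATE, n=3):
    totals = Counter()
    for src, nbytes in samples:
        totals[src] += nbytes * rate      # scale the sample back up to an estimate
    return totals.most_common(n)

# A collector would compare these estimates against each host's baseline and
# flag sudden jumps as possible DoS sources or compromised hosts.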

Page 97: 87652141 the-new-data-center-brocade

Other security considerations include IP address spoofing and network segmentation. Unicast Reverse Path Forwarding (uRPF) as defined in RFC 3704 provides a means to block packets from sources that have not already been registered in a router's routing information base (RIB) or forwarding information base (FIB). Address spoofing is typically used to disguise the source of DoS attacks, so uRPF is a further defense against attempts to overwhelm network routers. Another spoofing hazard is Address Resolution Protocol (ARP) spoofing, which attempts to associate an attacker's MAC address with a valid user IP address to sniff or modify data between legitimate hosts. ARP spoofing can be thwarted via ARP inspection or monitoring of ARP requests to ensure that only valid queries are allowed.
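A strict uRPF check can be sketched as a reverse route lookup: accept a packet only if the best route back to its source points out the interface it arrived on. The prefixes and interface names below are hypothetical.

import ipaddress

FIB = {   # forwarding information base: prefix -> interface used to reach it
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("192.168.5.0/24"): "eth1",
}

def urpf_strict_accept(src_ip, arrival_interface):
    src = ipaddress.ip_address(src_ip)
    matches = [net for net in FIB if src in net]
    if not matches:
        return False                                   # no return route at all: drop
    best = max(matches, key=lambda n: n.prefixlen)     # longest-prefix match
    return FIB[best] == arrival_interface

print(urpf_strict_accept("10.1.2.3", "eth0"))   # True: source is reachable via eth0
print(urpf_strict_accept("10.1.2.3", "eth1"))   # False: likely spoofed, so drop it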

For very large data center networks, risks to the network as a whole can be reduced by segmenting the network through use of Virtual Routing and Forwarding (VRF). VRF is implemented by enabling a router with multiple independent instances of routing tables, which essentially turns a single router into multiple virtual routers. A single physical network can thus be subdivided into multiple virtual networks with traffic isolation between designated departments or applications.
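A toy model makes the point: one device holds several independent routing tables, and a lookup in one VRF never sees another VRF's routes, even for the same prefix. VRF names, prefixes, and next hops are hypothetical.

import ipaddress

class VrfRouter:
    def __init__(self):
        self.tables = {}   # VRF name -> {prefix: next hop}

    def add_route(self, vrf, prefix, next_hop):
        self.tables.setdefault(vrf, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, vrf, dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        table = self.tables.get(vrf, {})
        matches = [p for p in table if dst in p]
        return table[max(matches, key=lambda p: p.prefixlen)] if matches else None

r = VrfRouter()
r.add_route("finance", "10.0.0.0/8", "10.255.0.1")
r.add_route("engineering", "10.0.0.0/8", "10.254.0.1")   # same prefix, isolated table
print(r.lookup("finance", "10.1.2.3"))        # 10.255.0.1
print(r.lookup("engineering", "10.1.2.3"))    # 10.254.0.1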

Brocade switches and routers provide an entire suite of security protocols and services to protect the data center network and maintain stable operation and management.

Power, Space and Cooling Efficiency
According to The Server and StorageIO Group, IT consultants, network infrastructure contributes only 10% to 15% of IT equipment power consumption in the data center, as shown in Figure 40. Compared to server power consumption at 48%, 15% may not seem a significant number, but considering that a typical data center can spend close to a million dollars per year on power, the energy efficiency of every piece of IT equipment represents a potential savings. Closer cooperation between data center administrators and the facilities management responsible for the power bill can lead to a careful examination of the power draw and cooling requirements of network equipment and selection of products that provide both performance and availability as well as lower energy consumption. Especially for networking products, there can be a wide disparity between vendors who have integrated energy efficiency into their product design philosophy and those who have not.
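A back-of-the-envelope calculation shows why the network share still matters; every figure below is hypothetical and chosen only to be roughly consistent with the percentages quoted above.

annual_power_bill = 1_000_000     # USD per year, illustrative
it_equipment_share = 0.50         # IT equipment vs. cooling and other facility overhead
network_share_of_it = 0.125       # midpoint of the 10-15% range for network gear

network_power_cost = annual_power_bill * it_equipment_share * network_share_of_it
print(f"Estimated network power cost: ${network_power_cost:,.0f} per year")    # $62,500
print(f"Saved by 25% more efficient gear: ${network_power_cost * 0.25:,.0f}")  # $15,625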

Page 98: 87652141 the-new-data-center-brocade

Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage.

Designing for data center energy efficiency includes product selection that provides the highest productivity with the least energy footprint. Use of high-port-density switches, for example, can reduce the total number of power supplies, fans, and other components that would otherwise be deployed if smaller switches were used. Combining access and aggregation layers with a BigIron RX Series switch likewise reduces the total number of elements required to support host connectivity. Selecting larger end-of-row access-layer switches instead of individual top-of-rack switches has a similar effect.

The increased energy efficiency of these network design options, however, still ultimately depends on how the vendor has incorporated energy-saving components into the product architecture. As with SAN products such as the Brocade DCX Backbone, Brocade LAN solutions are engineered for energy efficiency and consume less than a fourth of the power of competing products in comparable classes of equipment.

Network Virtualization
The networking complement to server virtualization is a suite of virtualization protocols that enable extended sections of a shared multi-switch network to function as independent LANs (VLANs) or a single switch to operate as multiple virtual switches (Virtual Routing and Forwarding (VRF), as discussed earlier). In addition, protocols such as virtual IPs (VIPs) can be used to extend virtual domains between data centers or multiple sites over distance. As with server virtualization, the intention of network virtualization is to maximize productive use of existing infrastructure to reinforce traffic separation, security, availability, and performance.

[Figure 40 chart: IT Data Center Typical Power Consumption; segments labeled Cooling/HVAC, IT equipment, Other, Servers, Storage (disk and tape), External storage (all tiers), Tape drive/library, and Network (SAN/LAN/WAN), with approximate percentage shares]

Page 99: 87652141 the-new-data-center-brocade

Application separation via VLANs at Layer 2 or VRF at Layer 3, for example, can provide a means to better meet service-level agreements (SLAs) and conform to regulatory compliance requirements. Likewise, network virtualization can be used to create logically separate security zones for policy enforcement without deploying physically separate networks.

Application Delivery Infrastructure
One of the major transformations in business applications over the past few years has been the shift from conventional applications to Web-based enterprise applications. Use of Internet-enabled protocols such as HTTP (HyperText Transfer Protocol) and HTTPS (HyperText Transfer Protocol Secure) has streamlined application development and delivery and is now a prerequisite for next-generation cloud computing solutions. At the same time, however, Web-based enterprise applications present a number of challenges due to increased network and server loads, greater user access, and security concerns. The concurrent proliferation of virtualized servers helps to alleviate the application workload issues but adds complexity in designing resilient configurations that can provide continuous access. As discussed in "Chapter 3: Doing More with Less" starting on page 17, implementing a successful server virtualization plan requires careful attention to both upstream LAN network impact as well as downstream SAN impact. Application delivery controllers (also known as Layer 4-7 switches) provide a particularly effective means to address the upstream network consequences of increased traffic volumes when Web-based enterprise applications are supported on higher populations of virtualized servers.

Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure.

[Figure 41 callouts: Clients, Network, Layer 2-3 switches, Applications (Web/Mail/DNS, Microsoft, SAP, Oracle)]

Page 100: 87652141 the-new-data-center-brocade

As illustrated in Figure 41, conventional network switching and routing cannot prevent higher traffic volumes generated by user activity from overwhelming applications. Without a means to balance the workload between application servers, response time suffers even when the number of application server instances has been increased via server virtualization. In addition, whether access is over the Internet or through a company intranet, security vulnerabilities such as DoS attacks still exist.

The Brocade ServerIron ADX application delivery controller addresses these problems by providing hardware-assisted protocol processing offload, server workload balancing, and firewall protection to ensure that application access is distributed among the relevant servers and that access is secured. As shown in Figure 42, this solution solves multiple application-related issues simultaneously. By implementing Web-based protocol processing offload, server CPU cycles can be used more efficiently to process application requests. In addition, load balancing across multiple servers hosting the same application can ensure that no individual server or virtual machine is overwhelmed with requests. By also offloading HTTPS/SSL security protocols, the Brocade ServerIron ADX provides the intended level of security without further burdening the server pool. The Brocade ServerIron ADX also provides protection against DoS attacks and so facilitates application availability.
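A minimal least-connections balancer captures the workload-balancing idea; server names and connection counts are hypothetical, and a real application delivery controller adds health checks, session persistence, and protocol offload on top of this decision.

active_connections = {"app-server-1": 12, "app-server-2": 7, "app-server-3": 7}

def pick_server(pool):
    # Send the next request to the server with the fewest active connections
    return min(pool, key=pool.get)

def dispatch(request_id, pool):
    server = pick_server(pool)
    pool[server] += 1
    return f"request {request_id} -> {server}"

for i in range(3):
    print(dispatch(i, active_connections))
# request 0 -> app-server-2, request 1 -> app-server-3, request 2 -> app-server-2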

Figure 42. Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX.

[Figure 42 callouts: Clients, Network, Layer 2-3 switches, Brocade ServerIron ADX Application Delivery Controllers (SSL/encryption, firewalls, Mail, RADIUS, DNS, IPS/IDS, cache, switches), Applications and VMs (Web/Mail/DNS, Microsoft, SAP, Oracle)]

Page 101: 87652141 the-new-data-center-brocade

The value of application delivery controllers in safeguarding and equalizing application workloads appreciates as more business applications shift to Web-based applications. Cloud computing is the ultimate extension of this trend, with the applications themselves migrating from clients to enterprise application servers, which can be physically located across dispersed data center locations or outsourced to service providers. The Brocade ServerIron ADX provides global server load balancing (GSLB) for load balancing not only between individual servers but between geographically dispersed server or VM farms. With GSLB, clients are directed to the best site for the fastest content delivery given current workloads and optimum network response time. This approach also integrates enterprise-wide disaster recovery for application access without disruption to client transactions.
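A toy GSLB decision illustrates the idea of steering clients by both response time and site load; site names, metrics, and weights are hypothetical, and real implementations typically answer DNS queries with the address of the chosen site.

sites = {
    "us-west": {"rtt_ms": 20, "load": 0.85},
    "us-east": {"rtt_ms": 65, "load": 0.30},
    "eu-west": {"rtt_ms": 140, "load": 0.20},
}

def best_site(sites, rtt_weight=1.0, load_weight=100.0):
    # Lower score is better: penalize both measured latency and current utilization
    def score(name):
        s = sites[name]
        return rtt_weight * s["rtt_ms"] + load_weight * s["load"]
    return min(sites, key=score)

print(best_site(sites))   # us-east: moderate latency but plenty of spare capacity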

As with other network solutions, the benefits of application delivery controller technology can be maintained only if the product architecture maintains or improves client performance. Few network designers are willing to trade network response time for enhanced services. The Brocade ServerIron ADX, for example, provides an aggregate of 70 Gbps Layer 7 throughput and over 16 million Layer 4 transactions per second. Although the Brocade ServerIron ADX sits physically in the path between the network and its servers, performance is actually substantially increased compared to conventional connectivity. In addition to performance, the Brocade ServerIron ADX maintains Brocade's track record of providing the industry's most energy-efficient network products by using less than half the power of the closest competing application delivery controller product.

Page 102: 87652141 the-new-data-center-brocade

The New Data Center

7

Orchestration
Automating data center processes

So far virtualization has been "the" buzzword of twenty-first century IT parlance, and unfortunately the term has undergone depreciation due to overuse, and in particular over-marketing. As with the blind men and the elephant, virtualization appears to mean different things to different people depending on their areas of responsibility and unique issues. For revitalizing an existing data center or designing a new one, the umbrella term "virtualization" covers three primary domains: virtualization of compute power in the form of server virtualization, virtualization of data storage capacity in the form of storage virtualization, and virtualization of the data transport in the form of network virtualization. The common denominator between these three primary domains of virtualization is the use of new technology to streamline and automate IT processes while maximizing productive use of the physical IT infrastructure.

As with graphical user interfaces, virtualization hides the complexity of underlying hardware elements and configurations. The complexity does not go away but is now the responsibility of the virtualization layer inserted between physical and logical domains. From a client perspective, for example, an application running on a single physical server behaves the same as one running on a virtual machine. In this example, the hypervisor assumes responsibility for supplying all the expected CPU, memory, I/O, and other elements typical of a conventional server. The actual complexity of a virtualized environment is orders of magnitude greater than that of ordinary configurations, but so is the level of productivity and resource utilization. The same applies to the other domains of storage and network virtualization, and this places tremendous importance on the proper selection of products to extend virtualization across the enterprise.

Page 103: 87652141 the-new-data-center-brocade

Next-generation data center design necessarily incorporates a variety of virtualization technologies, but to virtualize the entire data center requires first of all a means to harmoniously orchestrate these technologies into an integral solution, as depicted in Figure 43. Because no single vendor can provide all the myriad elements found in a modern data center, orchestration requires vendor cooperation and new open systems standards to ensure stability and resilience. The alternative is proprietary solutions and products and the implicit vendor monopoly that accompanies single-source technologies. The market long ago rejected this vendor lock-in and has consistently supported an open systems approach to technology development and deployment.

Figure 43. Open systems-based orchestration between virtualization domains.

For large-scale virtualization environments, standards-based orchestration is all the more critical because virtualization in each domain is still undergoing rapid technical development. The Distributed Management Task Force (DMTF), for example, developed the Open Virtualization Format (OVF) standard for VM deployment and mobility. The Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) includes open standards for deployment and management of virtual storage environments. The American National Standards Institute T11.5 work group developed the Fabric Application Interface Standard (FAIS) to promote open APIs for implementing storage virtualization via the fabric. IEEE and IETF have progressively developed more sophisticated open standards for network virtualization, from VLANs to VRF. The development of open standards and

[Figure 43 callouts: Server virtualization, Storage virtualization, Network virtualization, APIs, Orchestration framework]


Page 104: 87652141 the-new-data-center-brocade

common APIs is the prerequisite for developing comprehensive orchestration frameworks that can automate the creation, allocation, and management of virtualized resources across data center domains. In addition, open standards become the guideposts for further development of specific virtualization technologies, so that vendors can develop products with a much higher degree of interoperability.

Data center orchestration assumes that a single conductor (in this case, a single management framework) provides configuration, change, and monitoring management over an IT infrastructure that is based on a complex of virtualization technologies. This in turn implies that the initial deployment of an application, any changes to its environment, and proactive monitoring of its health are no longer manual processes but are largely automated according to a set of defined IT policies. Enabled by open APIs in the server, storage, and network domains, the data center infrastructure automatically allocates the requisite CPU, memory, I/O, and resilience for a particular application; assigns storage capacity, boot LUNs, and any required security or QoS parameters needed for storage access; and provides optimized client access through the data communications network, using VLANs, application delivery, load balancing, or other network tuning to support the application. As application workloads change over time, the application can be migrated from one server resource to another, storage volumes increased or decreased, QoS levels adjusted appropriately, security status changed, and bandwidth adjusted for upstream client access. The ideal of data center orchestration is that the configuration, deployment, and management of applications on the underlying infrastructure should require little or no human intervention and instead rely on intelligence engineered into each domain.
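The policy-driven flavor of this automation can be sketched as a small provisioning plan generator; the policy names, resource figures, and provision() helper are hypothetical, and a real framework would hand each section of the plan to the corresponding domain's open API.

POLICIES = {
    "mission-critical": {"vcpus": 8, "memory_gb": 32, "storage_gb": 500,
                         "storage_qos": "high", "vlan": 100, "load_balanced": True},
    "general-purpose":  {"vcpus": 2, "memory_gb": 8, "storage_gb": 100,
                         "storage_qos": "standard", "vlan": 200, "load_balanced": False},
}

def provision(app_name, tier):
    p = POLICIES[tier]
    # Each section of the returned plan maps to one domain: server, storage, network
    return {
        "server":  {"app": app_name, "vcpus": p["vcpus"], "memory_gb": p["memory_gb"]},
        "storage": {"capacity_gb": p["storage_gb"], "qos": p["storage_qos"]},
        "network": {"vlan": p["vlan"], "load_balanced": p["load_balanced"]},
    }

print(provision("order-entry", "mission-critical"))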

Of course, servers, storage, and network equipment do not rack themselves up and plug themselves in. The physical infrastructure must be properly sized, planned, selected, and deployed before logical automation and virtualization can be applied. With tight budgets, it may not be possible to provision all the elements needed for full data center orchestration, but careful selection of products today can lay the groundwork for fuller implementation tomorrow.

With average corporate data growth at 60% per year, data center orchestration is becoming a business necessity. Companies cannot continue to add staff to manage increased volumes of applications and data, and administrator productivity cannot meet growth rates without full-scale virtualization and automation of IT processes. Servers, storage, and networking, which formerly stood as isolated

Page 105: 87652141 the-new-data-center-brocade

management domains, are being transformed into interrelated services. For Brocade, network infrastructure as a service requires richer intelligence in the network to coordinate provisioning of bandwidth, QoS, resiliency, and security features to support server and storage services.

Because this uber-technology is still under construction, not all necessary components are currently available, but substantial progress has been made. Server virtualization, for example, is now a mature technology that is moving from secondary applications to primary ones. Brocade is working with VMware, Microsoft, and others to coordinate communication between the SAN and LAN infrastructure and various virtualization hypervisors so that proactive monitoring of storage bandwidth and QoS can trigger migration of VMs to more available resources should congestion occur, as shown in Figure 44.
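The monitoring-to-migration loop can be sketched as a simple threshold rule: if a host's storage-path utilization stays above its QoS target, recommend moving one of its VMs to the least loaded host. Host names, the threshold, and the recommend_migration() helper are hypothetical and stand in for the hypervisor and fabric APIs involved.

QOS_THRESHOLD = 0.80   # fraction of available storage bandwidth in use

hosts = {
    "host-01": {"storage_util": 0.92, "vms": ["vm-db1", "vm-web3"]},
    "host-02": {"storage_util": 0.40, "vms": ["vm-web1"]},
    "host-03": {"storage_util": 0.55, "vms": ["vm-app2"]},
}

def recommend_migration(hosts, threshold=QOS_THRESHOLD):
    for name, h in hosts.items():
        if h["storage_util"] > threshold and h["vms"]:
            target = min(hosts, key=lambda n: hosts[n]["storage_util"])
            if target != name:
                return {"move_vm": h["vms"][0], "from": name, "to": target}
    return None

print(recommend_migration(hosts))
# {'move_vm': 'vm-db1', 'from': 'host-01', 'to': 'host-02'}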

Figure 44. Brocade Management Pack for Microsoft System Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration.

The SAN Call Home events displayed in the Microsoft System Center Operations Manager interface are shown in Figure 50 on page 94.

[Figure 44 callouts: Microsoft System Center VMM, Microsoft System Center Operations Manager, Brocade HBA plus Brocade Management Pack for Microsoft System Center VMM, QoS Engine, Brocade DCFM, SAN, LAN; VMs move from the first physical server to the next available]


Page 106: 87652141 the-new-data-center-brocade

On the storage front, Brocade supports fabric-based storage virtualization with the Brocade FA4-18 Application Blade and Brocade's Storage Application Services (SAS) APIs. Based on FAIS standards, the Brocade FA4-18 supports applications such as EMC Invista to maximize efficient utilization of storage assets. For client access, the Brocade ServerIron ADX application delivery controller automates load balancing of client requests and offloads upper-layer protocol processing from the destination VMs. Other capabilities such as 10 Gigabit Ethernet and 8 Gbps Fibre Channel connectivity, fabric-based storage encryption, and virtual routing protocols can help data center network designers allocate enhanced bandwidth and services to accommodate both current requirements and future growth. Collectively, these building blocks facilitate higher degrees of data center orchestration to achieve the IT business goal of doing far more with much less.

Page 107: 87652141 the-new-data-center-brocade

Page 108: 87652141 the-new-data-center-brocade

The New Data Center

8

Brocade Solutions Optimized for Server Virtualization
Enabling server consolidation and end-to-end fabric management

Brocade has engineered a number of different network components that enable server virtualization in the data center fabric. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The server connectivity and convergence products described in this chapter are:

• “Server Adapters” on page 89

• “Brocade 8000 Switch and FCOE10-24 Blade” on page 92

• “Access Gateway” on page 93

• “Brocade Management Pack” on page 94

• “Brocade ServerIron ADX” on page 95

Server Adapters
In mid-2008, Brocade released a family of 8 and 4 Gbps Fibre Channel HBAs. Highlights of Brocade FC HBAs include:

• Maximizes bus throughput with a Fibre Channel-to-PCIe 2.0a Gen2 (x8) bus interface with intelligent lane negotiation

• Prioritizes traffic and minimizes network congestion with target rate limiting, frame-based prioritization, and 32 Virtual Channels per port with guaranteed QoS

Page 109: 87652141 the-new-data-center-brocade

• Enhances security with Fibre Channel-Security Protocol (FC-SP) for device authentication and hardware-based AES-GCM; ready for in-flight data encryption

• Supports virtualized environments with NPIV for 255 virtual ports

• Uniquely enables end-to-end (server-to-storage) management in Brocade Data Center Fabric environments

Brocade 825/815 FC HBA
The Brocade 815 (single port) and Brocade 825 (dual port) 8 Gbps Fibre Channel-to-PCIe HBAs provide industry-leading server connectivity through unmatched hardware capabilities and unique software configurability. This class of HBAs is designed to help IT organizations deploy and manage true end-to-end SAN service across next-generation data centers.

Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown).

The Brocade 8 Gbps FC HBA also:

• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 8 Gbps

• Utilizes N_Port Trunking capabilities to create a single logical 16 Gbps high-speed link

Page 110: 87652141 the-new-data-center-brocade

Brocade 425/415 FC HBA
The Brocade 4 Gbps FC HBA has capabilities similar to those described for the 8 Gbps version. The Brocade 4 Gbps FC HBA also:

• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 4 Gbps

• Utilizes N_Port Trunking capabilities to create a single logical 8 Gbps high-speed link

Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown).

Brocade FCoE CNAs
The Brocade 1010 (single port) and Brocade 1020 (dual port) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNAs provide server I/O consolidation by transporting both storage and Ethernet networking traffic across the same physical connection. Industry-leading hardware capabilities, unique software configurability, and unified management all contribute to exceptional flexibility.

The Brocade 1000 Series CNAs combine the powerful capabilities of storage (Fibre Channel) and networking (Ethernet) devices. This approach helps improve TCO by significantly reducing power, cooling, and cabling costs through the use of a single adapter. It also extends storage and networking investments, including investments made in management and training. Utilizing hardware-based virtualization acceleration capabilities, organizations can optimize performance in virtual environments to increase overall ROI and improve TCO even further.

Page 111: 87652141 the-new-data-center-brocade

Leveraging IEEE standards for Data Center Bridging (DCB), the Brocade 1000 Series CNAs provide a highly efficient way to transport Fibre Channel storage traffic over Ethernet links, addressing the highly sensitive nature of storage traffic.

Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA.

Brocade 8000 Switch and FCOE10-24 Blade
The Brocade 8000 is a top-of-rack Layer 2 CEE/FCoE switch with 24 x 10 GbE ports for LAN connections and 8 x FC ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. The Brocade 8000 provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.

Supporting Windows and Linux environments, the Brocade 8000 Switch enables access to both LANs and SANs over a common server connection by utilizing Converged Enhanced Ethernet (CEE) and FCoE protocols. LAN traffic is forwarded to aggregation-layer Ethernet switches using conventional 10 GbE connections, and storage traffic is forwarded to Fibre Channel SANs over 8 Gbps FC connections.

Figure 48. Brocade 8000 Switch.

Page 112: 87652141 the-new-data-center-brocade

The Brocade FCOE10-24 Blade is a Layer 2 blade with a cut-through, non-blocking architecture designed for use with the Brocade DCX. It features 24 x 10 Gbps CEE ports and extends CEE/FCoE capabilities to backbone platforms, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access-layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 virtualized applications.

Figure 49. Brocade FCOE10-24 Blade.

Access Gateway
Brocade Access Gateway simplifies server and storage connectivity by enabling direct connection of servers to any SAN fabric, enhancing scalability by eliminating the switch domain identity and simplifying local switch device management. Brocade blade server SAN switches and the Brocade 300 and Brocade 5100 rack-mount switches are key components of enterprise data centers, bringing a wide variety of scalability, manageability, and cost advantages to SAN environments. These switches can be used in Access Gateway mode, available in the standard Brocade Fabric OS, for enhanced server connectivity to SANs.

Access Gateway provides:

• Seamless connectivity with any SAN fabric

• Improved scalability

• Simplified management

• Automatic failover and failback for high availability

• Lower total cost of ownership

Page 113: 87652141 the-new-data-center-brocade

Access Gateway mode eliminates traditional heterogeneous switch-to-switch interoperability challenges by utilizing NPIV standards to present Fibre Channel server connections as logical devices to SAN fabrics. Attaching through NPIV-enabled edge switches or directors, Access Gateway seamlessly connects servers to Brocade, McDATA, Cisco, or other SAN fabrics.

Brocade Management Pack
Brocade Management Pack for Microsoft System Center monitors the health and performance of Brocade HBA-to-SAN links and works with Microsoft System Center to provide intelligent recommendations for dynamically optimizing the performance of virtualized workloads. It provides Brocade HBA performance and health monitoring capabilities to System Center Operations Manager (SCOM), and that information can be used to dynamically optimize server resources in virtualized data centers via System Center Virtual Machine Manager (SCVMM).

It enables real-time monitoring of Brocade HBA links through SCOM, combined with proactive remediation action in the form of recommended Performance and Resource Optimization (PRO) Tips handled by SCVMM. As a result, IT organizations can improve efficiency while reducing their overall operating costs.

Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Manager interface.

Page 114: 87652141 the-new-data-center-brocade

Brocade ServerIron ADX
The Brocade ServerIron ADX Series of switches provides Layer 4-7 switching performance in an intelligent, modular application delivery controller platform. The switches, including the ServerIron ADX 1000, 4000, and 10000 models, enable highly secure and scalable service infrastructures to help applications run more efficiently and with higher availability. ServerIron ADX switches use detailed application message information beyond the traditional Layer 2 and 3 packet headers, directing client requests to the most available servers. These intelligent application switches transparently support any TCP- or UDP-based application by providing specialized acceleration, content caching, firewall load balancing, network optimization, and host offload features for Web services.

The Brocade ServerIron ADX Series also provides a reliable line of defense by securing servers and applications against many types of intrusion and attack without sacrificing performance.

All Brocade ServerIron ADX switches forward traffic flows based on Layer 4-7 definitions and deliver industry-leading performance for higher-layer application switching functions. Superior content switching capabilities include customizable rules based on URL, HOST, and other HTTP headers, as well as cookies, XML, and application content.

Brocade ServerIron ADX switches simplify server farm management and application upgrades by enabling organizations to easily remove and insert resources into the pool. The Brocade ServerIron ADX provides hardware-assisted, standards-based network monitoring for all application traffic, improving manageability and security for network and server resources. Extensive and customizable service health check capabilities monitor Layer 2, 3, 4, and 7 connectivity along with service availability and server response, enabling real-time problem detection. To optimize application availability, these switches support many high-availability mode options, with real-time session synchronization between two Brocade ServerIron ADX switches to protect against session loss during outages.

Figure 51. Brocade ServerIron ADX 1000.

Page 115: 87652141 the-new-data-center-brocade

Page 116: 87652141 the-new-data-center-brocade

The New Data Center

9

Brocade SAN Solutions
Meeting the most demanding data center requirements today and tomorrow

Brocade leads the pack in networked storage, from the development of Fibre Channel to its current family of high-performance, energy-efficient SAN switches, directors, and backbones and advanced fabric capabilities such as encryption and distance extension. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The SAN products described in this chapter are:

• “Brocade DCX Backbones (Core)” on page 98

• “Brocade 8 Gbps SAN Switches (Edge)” on page 100

• "Brocade Encryption Switch and FS8-18 Encryption Blade" on page 105

• "Brocade 7800 Extension Switch and FX8-24 Extension Blade" on page 106

• “Brocade Optical Transceiver Modules” on page 107

• “Brocade Data Center Fabric Manager” on page 108

Page 117: 87652141 the-new-data-center-brocade

Brocade DCX Backbones (Core)
The Brocade DCX and DCX-4S Backbones offer flexible management capabilities as well as Adaptive Networking services and fabric-based applications to help optimize network and application performance. To minimize risk and costly downtime, the platform leverages the proven "five-nines" (99.999%) reliability of hundreds of thousands of Brocade SAN deployments.

Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone.

The Brocade DCX facilitates the consolidation of server-to-server, server-to-storage, and storage-to-storage networks with highly available, lossless connectivity. In addition, it operates natively with Brocade and Brocade M-Series components, extending SAN investments for maximum ROI. It is designed to support a broad range of current and emerging network protocols to form a unified, high-performance data center fabric.

Page 118: 87652141 the-new-data-center-brocade

Table 1. Brocade DCX Capabilities

Industry-leading capabilities for large enterprises

Industry-leading Performance
• 8 Gbps per-port, full-line-rate performance
• 13 Tbps aggregate dual-chassis bandwidth (6.5 Tbps for a single chassis)
• 1 Tbps of aggregate ICL bandwidth
• More than 5x the performance of competitive offerings

High scalability
• High-density, bladed architecture
• Up to 384 8 Gbps Fibre Channel ports in a single chassis
• Up to 768 8 Gbps Fibre Channel ports in a dual-chassis configuration
• 544 Gbps aggregate bandwidth per slot plus local switching
• Fibre Channel Integrated Routing
• Specialty blades for 10 Gbps connectivity, Fibre Channel Routing over IP, and fabric-based applications

Energy efficiency
• Energy efficiency of less than one-half watt per Gbps
• 10x more energy efficient than competitive offerings

Ultra-High Availability
• Designed to support 99.99% uptime
• Passive backplane, separate and redundant control processor and core switching blades
• Hot-pluggable components, including redundant power supplies, fans, WWN cards, blades, and optics

Fabric services and applications
• Adaptive Networking services, including QoS, ingress rate limiting, traffic isolation, and Top Talkers
• Plug-in services for fabric-based storage virtualization, continuous data protection and replication, and online data migration

Multiprotocol capabilities and fabric interoperability
• Support for Fibre Channel, FICON, FCIP, and IPFC
• Designed for future 10 Gigabit Ethernet (GbE), Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE)
• Native connectivity in Brocade and Brocade M-Series fabrics, including backward and forward compatibility

Page 119: 87652141 the-new-data-center-brocade

Brocade 8 Gbps SAN Switches (Edge)
Industry-leading Brocade switches are the foundation for connecting servers and storage devices in SANs, enabling organizations to access and share data in a high-performance, manageable, and scalable manner. To protect existing investments, Brocade switches are fully forward and backward compatible, providing a seamless migration path to 8 Gbps connectivity and future technologies. This capability enables organizations to deploy 1, 2, 4, and 8 Gbps fabrics with highly scalable core-to-edge configurations.

Brocade standalone switch models offer flexible configurations ranging from 8 to 80 ports, and can function as core or edge switches, depending upon business requirements. With native E_Port interoperability, Brocade switches connect to the vast majority of fabrics in operation today, allowing organizations to seamlessly integrate and scale their existing SAN infrastructures. Moreover, Brocade switches are backed by FOS engineering, test, and support expertise to provide reliable operation in mixed fabrics. All switches feature flexible port configuration with Ports On Demand capabilities for straightforward scalability. Organizations can also experience high performance between switches by using Brocade ISL Trunking to achieve up to 64 Gbps total throughput.

Table 1. Brocade DCX Capabilities (continued)

Intelligent management and monitoring
• Full utilization of the Brocade Fabric OS embedded operating system
• Flexibility to utilize a CLI, Brocade DCFM, Brocade Advanced Web Tools, and Brocade Advanced Performance Monitoring
• Integration with third-party management tools

Page 120: 87652141 the-new-data-center-brocade

Brocade switches meet high-availability requirements, with the Brocade 5300, 5100, and 300 switches offering redundant, hot-pluggable components. All Brocade switches feature non-disruptive software upgrades, automatic path rerouting, and extensive diagnostics. Leveraging the Brocade networking model, these switches can provide a fabric capable of delivering overall system

Designed for flexibility, Brocade switches provide a low-cost solution for Direct-Attached Storage (DAS)-to-SAN migration, small SAN islands, Network-Attached Storage (NAS) back-ends, and the edge of core-to-edge enterprise SANs. As a result, these switches are ideal as standalone departmental SANs or as high-performance edge switches in large enterprise SANs.

The Brocade 5300 and 5100 switches support full Fibre Channel routing capabilities with the addition of the Fibre Channel Integrated Routing (IR) option. Using built-in routing capabilities, organizations can selectively share devices while still maintaining remote fabric isolation. They include a Virtual Fabrics feature that enables the partitioning of a physical SAN into logical fabrics. This provides fabric isolation by application, business group, customer, or traffic type without sacrificing performance, scalability, security, or reliability.

Brocade 5300 Switch
As the value and volume of business data continue to rise, organizations need technology solutions that are easy to implement and manage and that can grow and change with minimal disruption. The Brocade 5300 Switch is designed to consolidate connectivity in rapidly growing mission-critical environments, supporting 1, 2, 4, and 8 Gbps technology in configurations of 48, 64, or 80 ports in a 2U chassis. The combination of density, performance, and "pay-as-you-grow" scalability increases server and storage utilization, while reducing complexity for virtualized servers and storage.

Figure 53. Brocade 5300 Switch.

Page 121: 87652141 the-new-data-center-brocade

Used at the fabric core or at the edge of a tiered core-to-edge infrastructure, the Brocade 5300 operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments. The design makes it very efficient in power, cooling, and rack density to help enable midsize and large server and storage consolidation. The Brocade 5300 also includes Adaptive Networking capabilities to more efficiently manage resources in highly consolidated environments. It supports Fibre Channel Integrated Routing for selective device sharing and maintains remote fabric isolation for higher levels of scalability and fault isolation.

The Brocade 5300 utilizes ASIC technology featuring eight 8-port groups. Within these groups, an inter-switch link trunk can supply up to 68 Gbps of balanced data throughput. In addition to reducing congestion and increasing bandwidth, enhanced Brocade ISL Trunking utilizes ISLs more efficiently to preserve the number of usable switch ports. The density of the Brocade 5300 uniquely enables fan-out from the core of the data center fabric with less than half the number of switch devices to manage compared to traditional 32- or 40-port edge switches.

Brocade 5100 Switch
The Brocade 5100 Switch is designed for rapidly growing storage requirements in mission-critical environments, combining 1, 2, 4, and 8 Gbps Fibre Channel technology in configurations of 24, 32, or 40 ports in a 1U chassis. As a result, it provides low-cost access to industry-leading SAN technology and pay-as-you-grow scalability for consolidating storage and maximizing the value of virtual server deployments.

Figure 54. Brocade 5100 Switch.

Similar to the Brocade 5300, the Brocade 5100 features a flexible architecture that operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments. With the highest port density of any midrange enterprise switch, it is designed for a broad range of SAN architectures, consuming less than 2.5 watts of power per port for exceptional power and cooling efficiency. It features consolidated power and fan assemblies

Page 122: 87652141 the-new-data-center-brocade

to improve environmental performance. The Brocade 5100 is a cost-effective building block for standalone networks or the edge of enterprise core-to-edge fabrics.

Additional performance capabilities include the following:

• 32 Virtual Channels on each ISL enhance QoS traffic prioritization and "anti-starvation" capabilities at the port level to avoid performance degradation.

• Exchange-based Dynamic Path Selection optimizes fabric-wide performance and load balancing by automatically routing data to the most efficient available path in the fabric. It augments ISL Trunking to provide more effective load balancing in certain configurations. In addition, DPS can balance traffic between the Brocade 5100 and Brocade M-Series devices enabled with Brocade Open Trunking.

Brocade 300 Switch
The Brocade 300 Switch provides small to midsize enterprises with SAN connectivity that simplifies IT management infrastructures, improves system performance, maximizes the value of virtual server deployments, and reduces overall storage costs. The 8 Gbps Fibre Channel Brocade 300 provides a simple, affordable, single-switch solution for both new and existing SANs. It delivers up to 24 ports of 8 Gbps performance in an energy-efficient, optimized 1U form factor.

Figure 55. Brocade 300 Switch.

To simplify deployment, the Brocade 300 features the EZSwitchSetup wizard and other ease-of-use and configuration enhancements, as well as the optional Brocade Access Gateway mode of operation (supported with 24-port configurations only). Access Gateway mode enables connectivity into any SAN by utilizing NPIV switch standards to present Fibre Channel connections as logical devices to SAN fabrics. Attaching through NPIV-enabled switches and directors, the Brocade 300 in Access Gateway mode can connect to FOS-based, M-EOS-based, or other SAN fabrics.

Page 123: 87652141 the-new-data-center-brocade

Organizations can easily enable Access Gateway mode (see page 151) via the FOS CLI, Brocade Web Tools, or Brocade Fabric Manager. Key benefits of Access Gateway mode include:

• Improved scalability for large or rapidly growing server and virtual server environments

• Simplified management through the reduction of domains and management tasks

• Fabric interoperability for mixed-vendor SAN configurations that require full functionality

Brocade VA-40FC Switch
The Brocade VA-40FC is a high-performance Fibre Channel edge switch optimized for server connectivity in large-scale enterprise SANs. As organizations consolidate data centers, expand application services, and begin to implement cloud initiatives, large-scale server architectures are becoming a standard part of the data center. Minimizing the network deployment steps and simplifying management can help organizations grow seamlessly while reducing operating costs.

The Brocade VA-40FC helps meet this challenge, providing the first Fibre Channel edge switch optimized for server connectivity in large core-to-edge SANs. By leveraging Brocade Access Gateway technology, the Brocade VA-40FC enables zero-configuration deployment and reduces management of the network edge, increasing scalability and simplifying management for large-scale server architectures.

Figure 56. Brocade VA-40FC Switch.

The Brocade VA-40FC is in Access Gateway mode by default, which is ideal for larger SAN fabrics that can benefit from the scalability of fixed-port switches at the edge of the network. Some use cases for Access Gateway mode are:

• Connectivity of many servers into large SAN fabrics

• Connectivity of servers into Brocade, Cisco, or any NPIV-enabled SAN fabrics

• Connectivity into multiple SAN fabrics

Page 124: 87652141 the-new-data-center-brocade

The Brocade VA-40FC also supports Fabric Switch mode to provide standard Fibre Channel switching and routing capabilities that are available on all Brocade enterprise-class 8 Gbps solutions.

Brocade Encryption Switch and FS8-18 Encryption Blade
The Brocade Encryption Switch is a high-performance standalone device for protecting data-at-rest in mission-critical environments. It scales non-disruptively, providing from 48 up to 96 Gbps of disk encryption processing power. Moreover, the Brocade Encryption Switch is tightly integrated with industry-leading, enterprise-class key management systems that can scale to support key lifecycle services across distributed environments.

It is also FIPS 140-2 Level 3-compliant. Based on industry standards, Brocade encryption solutions for data-at-rest provide centralized, scalable encryption services that seamlessly integrate into existing Brocade Fabric OS environments.

Figure 57. Brocade Encryption Switch.

Figure 58. Brocade FS8-18 Encryption Blade.

Page 125: 87652141 the-new-data-center-brocade

Brocade 7800 Extension Switch and FX8-24 Extension Blade
The Brocade 7800 Extension Switch helps provide network infrastructure for remote data replication, backup, and migration. Leveraging next-generation Fibre Channel and advanced FCIP technology, the Brocade 7800 provides a flexible and extensible platform to move more data faster and further than ever before.

It can be configured for simple point-to-point or comprehensive multisite SAN extension. Up to 16 x 8 Gbps Fibre Channel ports and 6 x 1 GbE ports provide unmatched Fibre Channel and FCIP bandwidth, port density, and throughput for maximum application performance over WAN links.

Figure 59. Brocade 7800 Extension Switch.

The Brocade 7800 is an ideal platform for building or expanding a high-performance SAN extension infrastructure. It leverages cost-effective IP WAN transport to extend open systems and mainframe disk and tape storage applications over distances that would otherwise be impossible, impractical, or too expensive with standard Fibre Channel connections. A broad range of optional advanced extension, FICON, and SAN fabric services are available.

• The Brocade 7800 16/6 Extension Switch is a robust platform for data centers and multisite environments implementing disk and tape solutions for open systems and mainframe environments. Organizations can optimize bandwidth and throughput through 16 x 8 Gbps FC ports and 6 x 1 GbE ports.

• The Brocade 7800 4/2 Extension Switch is a cost-effective option for smaller data centers and remote offices implementing point-to-point disk replication for open systems. Organizations can optimize bandwidth and throughput through 4 x 8 Gbps FC ports and 2 x 1 GbE ports. The Brocade 7800 4/2 can be easily upgraded to the Brocade 7800 16/6 through software licensing.

Page 126: 87652141 the-new-data-center-brocade

The Brocade FX8-24 Extension Blade, designed specifically for the Brocade DCX Backbone, helps provide the network infrastructure for remote data replication, backup, and migration. Leveraging next-generation 8 Gbps Fibre Channel, 10 GbE, and advanced FCIP technology, the Brocade FX8-24 provides a flexible and extensible platform to move more data faster and further than ever before.

Figure 60. Brocade FX8-24 Extension Blade.

Up to two Brocade FX8-24 blades can be installed in a Brocade DCX or DCX-4S Backbone. Activating the optional 10 GbE ports doubles the aggregate bandwidth to 20 Gbps and enables additional FCIP port configurations (10 x 1 GbE ports and 1 x 10 GbE port, or 2 x 10 GbE ports).

Brocade Optical Transceiver Modules
Brocade optical transceiver modules, also known as Small Form-factor Pluggables (SFPs), plug into Brocade switches, directors, and backbones to provide Fibre Channel connectivity and satisfy a wide range of speed and distance requirements. Brocade transceiver modules are optimized for Brocade 8 Gbps platforms to maximize performance, reduce power consumption, and help ensure the highest availability of mission-critical applications. These transceiver modules support data rates up to 8 Gbps Fibre Channel and link lengths up to 30 kilometers (for 4 Gbps Fibre Channel).


Brocade Data Center Fabric Manager
Brocade Data Center Fabric Manager (DCFM) Enterprise unifies the management of large, multifabric, or multisite storage networks through a single pane of glass. It features enterprise-class reliability, availability, and serviceability (RAS), as well as advanced features such as proactive monitoring and alert notification. As a result, it helps optimize storage resources, maximize performance, and enhance the security of storage network infrastructures.

Brocade DCFM Enterprise configures and manages the Brocade DCX Backbone family, directors, switches, and extension solutions, as well as Brocade data-at-rest encryption, FCoE/DCB, HBA, and CNA products. It is part of a common framework designed to manage entire data center fabrics, from the storage ports to the HBAs, both physical and virtual. Brocade DCFM Enterprise tightly integrates with Brocade Fabric OS (FOS) to leverage key features such as Advanced Performance Monitoring, Fabric Watch, and Adaptive Networking services. As part of a common management ecosystem, Brocade DCFM Enterprise integrates with leading partner data center automation solutions through frameworks such as the Storage Management Initiative-Specification (SMI-S).

Figure 61. Brocade DCFM main window showing the topology view.


Chapter 10: Brocade LAN Network Solutions
End-to-end networking from the edge to the core of today's networking infrastructures

Brocade offers a complete line of enterprise and service provider Ethernet switches, Ethernet routers, application management, and network-wide security products. With industry-leading features, performance, reliability, and scalability capabilities, these products enable network convergence and secure network infrastructures to support advanced data, voice, and video applications. The complete Brocade product portfolio enables end-to-end networking from the edge to the core of today's networking infrastructures. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The LAN products described in this chapter are:

• “Core and Aggregation” on page 110

• “Access” on page 112

• “Brocade IronView Network Manager” on page 115

• “Brocade Mobility” on page 116

For a more detailed discussion of the access, aggregation, and core layers in the data center network, see “Chapter 6: The New Data Center LAN” starting on page 69.


Core and Aggregation
The network core is the nucleus of the data center LAN. In a three-tier model, the core also provides connectivity to the external corporate network, intranet, and Internet. At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches.

For application delivery and control, see also “Brocade ServerIron ADX” on page 95.

Brocade NetIron MLX Series
The Brocade NetIron MLX Series of switching routers is designed to provide the right mix of functionality and high performance while reducing TCO in the data center. Built with the Brocade state-of-the-art, fifth-generation, network-processor-based architecture and Terabit-scale switch fabrics, the NetIron MLX Series offers network planners a rich set of high-performance IPv4, IPv6, MPLS, and Multi-VRF capabilities as well as advanced Layer 2 switching capabilities.

The NetIron MLX Series includes the 4-slot NetIron MLX-4, 8-slot NetIron MLX-8, 16-slot NetIron MLX-16, and the 32-slot NetIron MLX-32. The series offers industry-leading port capacity and density with up to 256 x 10 GbE, 1536 x 1 GbE, 64 x OC-192, or 256 x OC-48 ports in a single system.

Figure 62. Brocade NetIron MLX-4.


Brocade BigIron RX Series
The Brocade BigIron RX Series of switches provides the first 2.2 billion packet-per-second devices that scale cost-effectively from the enterprise edge to the core, with hardware-based IP routing of up to 512,000 IP routes per line module. The high-availability design features redundant and hot-pluggable hardware, hitless software upgrades, and graceful BGP and OSPF restart.

The BigIron RX Series of Layer 2/3 Ethernet switches enables network designers to deploy an Ethernet infrastructure that addresses today's requirements with a scalable and future-ready architecture that will support network growth and evolution for years to come. The BigIron RX Series incorporates the latest advances in switch architecture, system resilience, QoS, and switch security in a family of modular chassis, setting leading industry benchmarks for price performance, scalability, and TCO.

Figure 63. Brocade BigIron RX-16.


Access
The access layer provides the direct network connection to application and file servers. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches.

Brocade TurboIron 24X Switch
The Brocade TurboIron 24X switch is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution that meets mission-critical data center ToR and High-Performance Cluster Computing (HPCC) requirements. An ultra-low-latency, cut-through, non-blocking architecture and low power consumption help provide a cost-effective solution for server or compute-node connectivity.

Additional highlights include:

• Highly efficient power and cooling with front-to-back airflow, automatic fan speed adjustment, and use of SFP+ and direct attached SFP+ copper (Twinax)

• High availability with redundant, load-sharing, hot-swappable, auto-sensing/switching power supplies and a triple-fan assembly

• End-to-end QoS with hardware-based marking, queuing, and congestion management

• Embedded per-port sFlow capabilities to support scalable hardware-based traffic monitoring

• Wire-speed performance with an ultra-low-latency, cut-through, non-blocking architecture ideal for HPC, iSCSI storage, and real-time application environments

Figure 64. Brocade TurboIron 24X Switch.


Brocade FastIron CX Series
The Brocade FastIron CX Series of switches provides new levels of performance, scalability, and flexibility required for today's enterprise networks. With advanced capabilities, these switches deliver performance and intelligence to the network edge in a flexible 1U form factor, which helps reduce infrastructure and administrative costs.

Designed for wire-speed and non-blocking performance, FastIron CX switches include 24- and 48-port models, in both Power over Ethernet (PoE) and non-PoE versions. Utilizing built-in 16 Gbps stacking ports and Brocade IronStack technology, organizations can stack up to eight switches into a single logical switch with up to 384 ports. PoE models support the emerging Power over Ethernet Plus (PoE+) standard to deliver up to 30 watts of power to edge devices, enabling next-generation campus applications.

Figure 65. Brocade FastIron CX-624S-HPOE Switch.

Brocade NetIron CES 2000 Series
Whether they are located at a central office or remote site, the availability of space often determines the feasibility of deploying new equipment and services in a data center environment. The Brocade NetIron Compact Ethernet Switch (CES) 2000 Series is purpose-built to provide flexible, resilient, secure, and advanced Ethernet and MPLS-based services in a compact form factor.

The NetIron CES 2000 Series is a family of compact 1U, multiservice edge/aggregation switches that combine powerful capabilities with high performance and availability. The switches provide a broad set of advanced Layer 2, IPv4, and MPLS capabilities in the same device. As a result, they support a diverse set of applications in data center and large enterprise networks.


Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions.

Brocade FastIron Edge X Series
The Brocade FastIron Edge X Series switches are high-performance data center-class switches that provide Gigabit copper and fiber-optic connectivity and 10 GbE uplinks. Advanced Layer 3 routing capabilities and full IPv6 support are designed for the most demanding environments.

FastIron Edge X Series offers a diverse range of switches that meet Layer 2/3 edge, aggregation, or small-network backbone-connectivity requirements with intelligent network services, including superior QoS, predictable performance, advanced security, comprehensive management, and integrated resiliency. It is the ideal networking platform to deliver 10 GbE.

Figure 67. Brocade FastIron Edge X 624.


Brocade IronView Network Manager
Brocade IronView Network Manager (INM) provides a comprehensive tool for configuring, managing, monitoring, and securing Brocade wired and wireless network products. It is an intelligent network management solution that reduces the complexity of changing, monitoring, and managing network-wide features such as Access Control Lists (ACLs), rate limiting policies, VLANs, software and configuration updates, and network alarms and events.

Using Brocade INM, organizations can automatically discover Brocade network equipment and immediately acquire, view, and archive configurations for each device. In addition, they can easily configure and deploy policies for wired and wireless products.

Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom).


Brocade Mobility
While once considered a luxury, Wi-Fi connectivity is now an integral part of the modern enterprise. To that end, most IT organizations are deploying Wireless LANs (WLANs). With the introduction of the IEEE 802.11n standard, these organizations can save significant capital and feel confident in expanding their wireless deployments to business-critical applications. In fact, wireless technologies often match the performance of wired networks, with simplified deployment, robust security, and a significantly lower cost. Brocade offers all the pieces to deploy a wireless enterprise. In addition to indoor networking equipment, Brocade also provides the tools to wirelessly connect multiple buildings across a corporate campus.

Brocade offers two models of controllers: the Brocade RFS6000 and RFS7000 Controllers. Brocade Mobility controllers enable wireless enterprises by providing an integrated communications platform that delivers secure and reliable voice, video, and data applications in Wireless LAN (WLAN) environments. Based on an innovative architecture, Brocade Mobility controllers provide:

• Wired and wireless networking services

• Multiple locationing technologies such as Wi-Fi and RFID

• Resiliency via 3G/4G wireless broadband backhaul

• High performance with 802.11n networks

The Brocade Mobility RFS7000 features a multicore, multithreaded architecture designed for large-scale, high-bandwidth enterprise deployments. It easily handles from 8000 to 96,000 mobile devices and 256 to 3000 802.11 dual-radio a/b/g/n access points or 1024 adaptive access points (Brocade Mobility 5181 a/b/g or Brocade Mobility 7131 a/b/g/n) per controller. The Brocade Mobility RFS7000 provides the investment protection enterprises require: innovative clustering technology provides a 12X capacity increase, and smart licensing enables efficient, scalable network expansion.


Chapter 11: Brocade One
Simplifying complexity in the virtualized data center

Brocade One, announced in mid-2010, is the unifying network architecture and strategy that enables customers to simplify the complexity of virtualizing their applications. By removing network layers, simplifying management, and protecting existing technology investments, Brocade One helps customers migrate to a world where information and services are available anywhere in the cloud.

Evolution not Revolution
In the data center, Brocade shares a common industry view that IT infrastructures will eventually evolve to a highly virtualized, services-on-demand state enabled through the cloud. The process, an evolutionary path toward this desired end-state, is as important as reaching the end-state. This evolution has already started inside the data center, and Brocade offers insights on the challenges faced as it moves out to the rest of the network.

The realization of this vision requires radically simplified network architectures. This is best achieved through a deep understanding of data center networking intricacies and the rejection of rip-and-replace deployment scenarios with vertically integrated stacks sourced from a single vendor. In contrast, the Brocade One architecture takes a customer-centric approach with the following commitments:

• Unmatched simplicity. Dramatically simplifying the design, deployment, configuration, and ongoing support of IT infrastructures.

• Investment protection. Emphasizing an approach that builds on existing customer multivendor infrastructures while improving their total cost of ownership.


• High-availability networking. Supporting the ever-increasing requirements for unparalleled uptime by setting the standard for continuous operations, ease of management, and resiliency.

• Optimized applications. Optimizing current and future customer applications.

The new Brocade converged fabric solutions include unique and powerful innovations customized to support virtualized data centers, including:

• Brocade Virtual Cluster Switching™ (VCS). A new class of Brocade-developed technologies designed to address the unique requirements of virtualized data centers. Brocade VCS, available in shipping product in late 2010, overcomes the limitations of conventional Ethernet networking by applying non-stop operations, any-to-any connectivity, and the intelligence of fabric switching.

Figure 69. The pillars of Brocade VCS (detailed in the next section).

• Brocade Virtual Access Layer (VAL). A logical layer between the Brocade converged fabric and server virtualization hypervisors that will help ensure a consistent interface and set of services for virtual machines (VMs) connected to the network. Brocade VAL is designed to be vendor agnostic and will support all major hypervisors by utilizing industry-standard technologies, including the emerging Virtual Ethernet Port Aggregator (VEPA) and Virtual Ethernet Bridging (VEB) standards.

[Figure 69 diagram content: the pillars of Brocade Virtual Cluster Switching (VCS). Ethernet Fabric: no STP; multi-path, deterministic; auto-healing, non-disruptive; lossless, low latency; convergence ready. Logical Chassis: logically flattens and collapses network layers; scale the edge and manage as if a single switch; auto-configuration; centralized or distributed management, end-to-end. Distributed Intelligence: self-forming; arbitrary topology; network aware of all members, devices, and VMs; masterless control, no reconfiguration; VAL interaction. Dynamic Services: connectivity over distance, native Fibre Channel, security services, Layer 4–7, and so on.]


• Brocade Open Virtual Compute Blocks. Brocade is working with leading systems and IT infrastructure vendors to build tested and verified data center blueprints for highly scalable and cost-effective deployment of VMs on converged fabrics.

• Brocade Network Advisor. A best-in-class element management toolset that will help provide industry-standard and customized support for industry-leading network management, storage management, virtualization management, and data center orchestration tools.

• Multiprotocol Support. Brocade converged fabrics are designed to transport all types of network and storage traffic over a single wire to reduce complexity and help ensure a simplified migration path from current technologies.

Industry's First Converged Data Center Fabric
Brocade designed VCS as the core technology for building large, high-performance, and flat Layer 2 data center fabrics to better support the increased adoption of server virtualization. Brocade VCS is built on Data Center Bridging technologies to meet the increased network reliability and performance requirements as customers deploy more and more VMs. Brocade helped pioneer DCB through industry standards bodies to ensure that the technology would be suitable for the rigors of data center networking.

Another key technology in Brocade VCS is the emerging IETF standard Transparent Interconnection of Lots of Links (TRILL), which will provide a more efficient way of moving data throughout converged fabrics by automatically determining the shortest path between routes. Both DCB and TRILL are advances to current technologies and are critical for building large, flat, and efficient converged fabrics capable of supporting both Ethernet and storage traffic. They are also examples of how Brocade has been able to leverage decades of experience in building data center fabrics to deliver the industry's first converged fabrics.

Brocade VCS also simplifies the management of Brocade converged fabrics by managing multiple discrete switches as one logical entity. These VCS features allow customers to flatten network architectures into a single Layer 2 domain that can be managed as a single switch. This reduces network complexity and operational costs while allowing VCS users to scale their VM environments to global topologies.


Ethernet Fabric
In the new data center LAN, Spanning Tree Protocol is no longer necessary, because the Ethernet fabric appears as a single logical switch to connected servers, devices, and the rest of the network. Also, Multi-Chassis Trunking (MCT) capabilities in aggregation switches enable a logical one-to-one relationship between the access (VCS) and aggregation layers of the network. The Ethernet fabric is an advanced multipath network utilizing TRILL, in which all paths in the network are active and traffic is automatically distributed across the equal-cost paths. In this optimized environment, traffic automatically takes the shortest path for minimum latency without manual configuration.

And, unlike switch stacking technologies, the Ethernet fabric is masterless. This means that no single switch stores configuration information or controls fabric operations. Events such as added, removed, or failed links are not disruptive to the Ethernet fabric and do not require all traffic in the fabric to stop. If a single link fails, traffic is automatically rerouted to other available paths in less than a second. Moreover, single component failures do not require the entire fabric topology to reconverge, helping to ensure that no traffic is negatively impacted by an isolated issue.
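The equal-cost multipath behavior described above can be illustrated with a small, generic sketch (not Brocade's implementation): a breadth-first search over a switch topology counts how many shortest paths connect two switches, which corresponds to the set of paths a TRILL-style fabric can keep simultaneously active. The topology and switch names below are hypothetical.

```python
from collections import defaultdict, deque

def shortest_path_count(links, src, dst):
    """Hop-count distance and number of equal-cost shortest paths
    between two switches in an arbitrary fabric topology."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)

    dist = {src: 0}
    paths = defaultdict(int)
    paths[src] = 1
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:                # first time this switch is reached
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
            if dist[nbr] == dist[node] + 1:    # another equal-cost way to reach it
                paths[nbr] += paths[node]
    return dist.get(dst), paths[dst]

# Hypothetical two-tier fabric: two leaf switches, each uplinked to two spines.
links = [("leaf1", "spine1"), ("leaf1", "spine2"),
         ("leaf2", "spine1"), ("leaf2", "spine2")]
hops, n_paths = shortest_path_count(links, "leaf1", "leaf2")
print(hops, "hops,", n_paths, "equal-cost active paths")   # 2 hops, 2 paths
```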

Distributed Intelligence
Brocade VCS also enhances server virtualization with technologies that increase VM visibility in the network and enable seamless migration of policies along with the VM. VCS achieves this through a distributed services architecture that makes the fabric aware of all connected devices and shares that information across those devices. Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a VM's network profiles, such as security or QoS levels, to follow the VM during migrations without manual intervention. This unprecedented level of VM visibility and automated profile management helps intelligently remove the physical barriers to VM mobility that exist in current technologies and network architectures.

Distributed intelligence allows the Ethernet fabric to be “self-forming.” When two VCS-enabled switches are connected, the fabric is automatically created, and the switches discover the common fabric configuration. Scaling bandwidth in the fabric is as simple as connecting another link between switches or adding a new switch as required.


The Ethernet fabric does not dictate a specific topology, so it does not restrict oversubscription ratios. As a result, network architects can create a topology that best meets specific application requirements. Unlike other technologies, VCS enables different end-to-end subscription ratios to be created or fine-tuned as application demands change over time.

Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different than any other Layer 2 switch. The network sees the fabric as a single switch, whether the fabric contains as few as 48 ports or thousands of ports. Each physical switch in the fabric is managed as if it were a port module in a chassis. This enables fabric scalability without manual configuration. When a port module is added to a chassis, the module does not need to be configured, and a switch can be added to the Ethernet fabric just as easily. When a VCS-enabled switch is connected to the fabric, it inherits the configuration of the fabric and the new ports become available immediately.

The logical chassis capability significantly reduces management of small-form-factor edge switches. Instead of managing each top-of-rack switch (or switches in blade server chassis) individually, organizations can manage them as one logical chassis, which further optimizes the network in the virtualized data center and will further enable a cloud computing model.

Dynamic Services
Brocade VCS also offers dynamic services so that you can add new network and fabric services to Brocade converged fabrics, including capabilities such as fabric extension over distance, application delivery, native Fibre Channel connectivity, and enhanced security services such as firewalls and data encryption. Through VCS, the new switches and software with these services behave as service modules within a logical chassis. Furthermore, the new services are then made available to the entire converged fabric, dynamically evolving the fabric with new functionality. Switches with these unique capabilities can join the Ethernet fabric, adding a network service layer across the entire fabric.


The VCS Architecture
The VCS architecture, shown in Figure 70, flattens the network by collapsing the traditional access and aggregation layers. Since the fabric is self-aggregating, there is no need for aggregation switches to manage subscription ratios and provide server-to-server communication. For maximum flexibility of server and storage connectivity, multiple protocols and speeds are supported: 1 GbE, 10 GbE, 10 GbE with DCB, and Fibre Channel. Since the Ethernet fabric is one logical chassis with distributed intelligence, the VM sphere of mobility spans the entire VCS. Mobility extends even further with the VCS fabric extension Dynamic Service. At the core of the data center, routers are virtualized using MCT and provide high-performance connectivity between Ethernet fabrics, inside the data center or across data centers.

Servers running high-priority applications or other servers requiring the highest block storage service levels connect to the SAN using native Fibre Channel. For lower-tier applications, FCoE or iSCSI storage can be connected directly to the Ethernet fabric, providing shared storage for servers connected to that fabric.

Figure 70. A Brocade VCS reference network architecture.

[Figure 70 diagram labels: core routers connecting to the public network and a remote data center; VCS Ethernet fabrics with blade servers, rack-mount servers, and VMs; a dedicated Fibre Channel SAN for Tier 1 applications; FC/FCoE/iSCSI/NAS storage; VCS fabric extension; VCS security services (firewall, encryption); Layer 4–7 application delivery.]


Appendix A: “Best Practices for Energy Efficient Storage Operations”
Version 1.0

October 2008

Authored by Tom Clark, Brocade, Green Storage Initiative (GSI) Chair, and Dr. Alan Yoder, NetApp, GSI Governing Board

Reprinted with permission of the SNIA

Introduction
The energy required to support data center IT operations is becoming a central concern worldwide. For some data centers, additional energy supply is simply not available, either due to finite power generation capacity in certain regions or the inability of the power distribution grid to accommodate more lines. Even if energy is available, it comes at an ever increasing cost. With current pricing, the cost of powering IT equipment is often higher than the original cost of the equipment itself. The increasing scarcity and higher cost of energy, however, is being accompanied by a sustained growth of applications and data. Simply throwing more hardware assets at the problem is no longer viable. More hardware means more energy consumption, more heat generation, and increasing load on the data center cooling system. Companies are therefore now seeking ways to accommodate data growth while reducing their overall power profile. This is a difficult challenge.

Data center energy efficiency solutions span the spectrum from more efficient rack placement and alternative cooling methods to server and storage virtualization technologies. The SNIA's Green Storage Initiative was formed to identify and promote energy efficiency solutions specifically relating to data storage. This document is the first iteration of the SNIA GSI's recommendations for maximizing utilization of data center storage assets while reducing overall power consumption. We plan to expand and update the content over time to include new energy-related storage technologies as well as SNIA-generated metrics for evaluating energy efficiency in storage product selection.

Some Fundamental Considerations
Reducing energy consumption is both an economic and a social imperative. While data centers represent only ~2% of total energy consumption in the US, the dollar figure is approximately $4B annually. In terms of power generation, data centers in the US require the equivalent of six 1000 MegaWatt power plants to sustain current operations. Global power consumption for data centers is more than twice the US figures. The inability of the power generation and delivery infrastructure to accommodate the growth in continued demand, however, means that most data centers will be facing power restrictions in the coming years. Gartner predicts that by 2009, half of the world's data centers will not have sufficient power to support their applications.1 An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.2 Even if there was a national campaign to build alternative energy generation capability, new systems would not be online soon enough to prevent a widespread energy deficit. This simply highlights the importance of finding new ways to leverage technology to increase energy efficiency within the data center and accomplish more IT processing with fewer energy resources.

In addition to the pending scarcity and increased cost of energy to power IT operations, data center managers face a continued explosion in data growth. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of ~1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The sustained growth of data requires new tools for data management, storage allocation, data retention, and data redundancy.

1. “Gartner Says 50 Percent of Data Centers Will Have Insufficient Power and Cooling Capacity by 2008,” Gartner Inc. Press Release, November 29, 2006.

2. “Emerson Network Power Presents Industry Survey Results That Project 96 Percent of Today's Data Centers Will Run Out of Capacity by 2011,” Emerson Press Release, November 16, 2006.


The conflict between the available supply of energy to power IT operations and the increasing demand imposed by data growth is further exacerbated by the operational requirement for high availability access to applications and data. Mission-critical applications in particular are high energy consumers and require more powerful processors, redundant servers for failover, redundant networking connectivity, redundant fabric pathing, and redundant data storage in the form of mirroring and data replication for disaster recovery. These top tier applications are so essential for business operations, however, that the doubling of server and storage hardware elements and the accompanying doubling of energy draw have been largely unavoidable. Here too, though, new green storage technologies and best practices can assist in retaining high availability of applications and data while reducing total energy requirements.

Shades of Green
The quandary for data center managers is in identifying which new technologies will actually have a sustainable impact for increasing energy efficiency and which are only transient patches whose initial energy benefit quickly dissipates as data center requirements change. Unfortunately, the standard market dynamic that eventually separates weak products from viable ones has not had sufficient time to eliminate the green pretenders. Consequently, analysts often complain about the 'greenwashing' of vendor marketing campaigns and the opportunistic attempt to portray marginally useful solutions as the cure to all the IT manager's energy ills.

Within the broader green environmental movement, greenwashing is also known as being “lite green” or sometimes “light green”. There are, however, other shades of green. Dark green refers to environmental solutions that rely on across-the-board reductions in energy and material consumption. For a data center, a dark green tactic would be to simply reduce the number of applications and associated hardware and halt the expansion of data growth. Simply cutting back, however, is not feasible for today's business operations. To remain competitive, businesses must be able to accommodate growth and expansion of operations.

Consequently, viable energy efficiency for ongoing data center operations must be based on solutions that are able to leverage state-of-the-art technologies to do much more with much less. This aligns to yet another shade of environmental green known as “bright green”. Bright green solutions reject both the superficial lite green and the Luddite dark green approaches to the environment and rely instead on technical innovation to provide sustainable productivity and growth while steadily driving down energy consumption. The following SNIA GSI best practices include many bright green solutions that accomplish the goal of energy reduction while increasing productivity of IT storage operations.

Although the Best Practices recommendations listed below are numbered sequentially, no prioritization is implied. Every data center operation has different characteristics and what is suitable for one application environment may not work in another.

These recommendations collectively fall into the category of “silver buckshot” in addressing data center storage issues. There is no single silver bullet to dramatically reduce IT energy consumption and cost. Instead, multiple energy efficient technologies can be deployed in concert to reduce the overall energy footprint and bring costs under control. Thin provisioning and data deduplication, for example, are distinctly different technologies that together can help reduce the amount of storage capacity required to support applications and thus the amount of energy-consuming hardware in the data center. When evaluating specific solutions, then, it is useful to imagine how they will work in concert with other products to achieve greater efficiencies.

Best Practice #1: Manage Your Data
A significant component of the exponential growth of data is the growth of redundant copies of data. By some industry estimates, over half of the total volume of a typical company's data exists in the form of redundant copies dispersed across multiple storage systems and client workstations. Consider the impact, for example, of emailing a 4 MB PowerPoint attachment to 100 users instead of simply sending a link to the file. The corporate email servers now have an additional 400 MB of capacity devoted to redundant copies of the same data. Even if individual users copy the attachment to their local drives, the original email and attachment may languish on the email server for months before the user tidies their Inbox. In addition, some users may copy the attachment to their individual share on a data center file server, further compounding the duplication. And to make matters worse, the lack of data retention policies can result in duplicate copies of data being maintained and backed up indefinitely.

This phenomenon is replicated daily across companies of every size worldwide, resulting in ever increasing requirements for storage, longer backup windows, and higher energy costs. A corporate policy for data management, redundancy, and retention is therefore an essential first step in managing data growth and getting storage energy costs under control. Many companies lack data management policies or effective means to enforce them because they are already overwhelmed with the consequences of prior data avalanches. Responding reactively to the problem, however, typically results in the spontaneous acquisition of more storage capacity, longer backup cycles, and more energy consumption. To proactively deal with data growth, begin with an audit of your existing applications and data and begin prioritizing data in terms of its business value.

Although tools are available to help identify and reduce data redundancy throughout the network, the primary outcome of a data audit should be to change corporate behavior. Are data sets periodically reviewed to ensure that only information that is relevant to business is retained? Does your company have a data retention policy and mechanisms to enforce it? Are you educating your users on the importance of managing their data and deleting non-essential or redundant copies of files? Are your Service Level Agreements (SLAs) structured to reward more efficient data management by individual departments? Given that data generators (i.e., end users) typically do not understand where their data resides or what resources are required to support it, creating policies for data management and retention can be a useful means to educate end users about the consequences of excessive data redundancy.

Proactively managing data also requires aligning specific applications and their data to the appropriate class of storage. Without a logical prioritization of applications in terms of business value, all applications and data receive the same high level of service. Most applications, however, are not truly mission-critical and do not require the more expensive storage infrastructure needed for high availability and performance. In addition, even high-value data does not typically sustain its value over time. As we will see in the recommendations below, aligning applications and data to the appropriate storage tier and migrating data from one tier to another as its value changes can reduce both the cost of storage and the cost of energy to drive it. This is especially true when SLAs are structured to require fewer backup copies as data value declines.


Best Practice #2: Select the Appropriate Storage RAID Level
Storage networking provides multiple levels of data protection, ranging from simple CRC checks on data frames to more sophisticated data recovery mechanisms such as RAID. RAID guards against catastrophic loss of data when disk drives fail by creating redundant copies of data or providing parity reconstruction of data onto spare disks.

RAID 1 mirroring creates a duplicate copy of disk data, but at the expense of doubling the number of disk drives and consequently doubling the power consumption of the storage infrastructure. The primary advantage of RAID 1 is that it can withstand the failure of one or all of the disks in one mirror of a given RAID set. For some mission-critical environments, the extra cost and power usage characteristic of RAID 1 may be unavoidable. Accessibility to data is sometimes so essential for business operations that the ability to quickly switch from primary storage to its mirror without any RAID reconstruct penalty is an absolute business requirement. Likewise, asynchronous and synchronous data replication provide redundant copies of disk data for high availability access and are widely deployed as insurance against system or site failure. As shown in Best Practice #1, however, not all data is mission critical and even high value data may decrease in value over time. It is therefore essential to determine what applications and data are absolutely required for continuous business operations and thus merit more expensive and less energy efficient RAID protection.

RAID 5's distributed parity algorithm enables a RAID set to withstand the loss of a single disk drive in a RAID set. In that respect, it offers the basic data protection against disk failure that RAID 1 provides, but only against a single disk failure and with no immediate failover to a mirrored array. While the RAID set does remain online, a failed disk must be reconstructed from the distributed parity on the surviving drives in the set, possibly impacting performance. Unlike RAID 1, however, RAID 5 only requires one spare drive in a RAID set. Fewer redundant drives means less energy consumption as well as better utilization of raw capacity.

By adding two additional drives, RAID 6 can withstand the loss of two disk drives in a RAID set, providing higher availability than RAID 5. Both solutions, however, are more energy efficient than RAID 1 mirroring (or RAID 1+0 mirroring and striping) and should be considered for applications that do not require an immediate failover to a secondary array.
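As a rough way to see how the RAID choice translates into drive count and power draw, the sketch below compares the raw drives needed to deliver the same usable capacity under RAID 1, RAID 5, and RAID 6. The drive size, RAID group size, and per-drive wattage are assumed values for illustration only; real arrays add hot spares, formatting overhead, and controller power.

```python
import math

WATTS_PER_DRIVE = 10        # assumed average draw per spinning drive (illustrative)

def drives_needed(usable_tb, drive_tb, raid_level, group_size=8):
    """Rough count of raw drives needed for a target usable capacity.
    Ignores hot spares, formatting overhead, and controller differences."""
    if raid_level == "RAID 1":                 # every data drive is mirrored
        return math.ceil(usable_tb / drive_tb) * 2
    if raid_level == "RAID 5":                 # one parity drive per group
        usable_per_group = (group_size - 1) * drive_tb
    elif raid_level == "RAID 6":               # two parity drives per group
        usable_per_group = (group_size - 2) * drive_tb
    else:
        raise ValueError("unknown RAID level")
    return math.ceil(usable_tb / usable_per_group) * group_size

for level in ("RAID 1", "RAID 5", "RAID 6"):
    n = drives_needed(usable_tb=100, drive_tb=2, raid_level=level)
    print(f"{level}: {n} drives, roughly {n * WATTS_PER_DRIVE} W spinning")
```

For 100 TB usable on 2 TB drives, the mirrored configuration needs roughly 100 drives while the parity-based configurations need about 64 to 72, which is the energy difference the text describes.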


Figure 1. Software Technologies for Green Storage, © 2008 Storage Networking Industry Association, All Rights Reserved, Alan Yoder, NetApp

As shown in Figure 1, the selection of the appropriate RAID levels to retain high availability data access while reducing the storage hardware footprint can enable incremental green benefits when combined with other technologies.

Best Practice #3: Leverage Storage Virtualization
Storage virtualization refers to a suite of technologies that create a logical abstraction layer above the physical storage layer. Instead of managing individual physical storage arrays, for example, virtualization enables administrators to manage multiple storage systems as a single logical pool of capacity, as shown in Figure 2.

 

[Figure 1 diagram content: successive columns labeled RAID 5/6, Thin Provisioning, Virtual Clones, and Dedupe & Compression show a data set with growth, snapshots, backups, archives, and test copies shrinking from 10 TB of raw capacity on RAID 10 toward 5 TB and ultimately 1 TB. Green technologies use less raw capacity to store and use the same data set, and power consumption falls accordingly.]


Figure 2. Storage Virtualization: Technologies for Simplifying Data Storage and Management, T. Clark, Addison-Wesley, used with permission from the author

On its own, storage virtualization is not inherently more energy efficient than conventional storage management but can be used to maximize efficient capacity utilization and thus slow the growth of hardware acquisition. By combining dispersed capacity into a single logical pool, it is now possible to allocate additional storage to resource-starved applications without having to deploy new energy-consuming hardware. Storage virtualization is also an enabling foundation technology for thin provisioning, resizeable volumes, snapshots, and other solutions that contribute to more energy efficient storage operations.
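A toy sketch of the pooling idea follows: several arrays with free capacity are presented as one logical pool, and a virtual LUN is carved out of whichever arrays have space. The class, array names, and allocation policy are hypothetical and greatly simplified; real virtualization layers also handle block mapping, migration, and failure domains.

```python
class VirtualStoragePool:
    """Toy abstraction layer: presents dispersed arrays as one capacity pool."""
    def __init__(self, arrays):
        # arrays: dict of array name -> free capacity in GB
        self.free = dict(arrays)
        self.luns = {}              # LUN name -> list of (array, GB) extents

    def total_free(self):
        return sum(self.free.values())

    def create_lun(self, name, size_gb):
        """Allocate a virtual LUN, spreading it across arrays with free space."""
        if size_gb > self.total_free():
            raise RuntimeError("pool exhausted")
        extents, remaining = [], size_gb
        for array, free in sorted(self.free.items(), key=lambda kv: -kv[1]):
            take = min(free, remaining)
            if take:
                extents.append((array, take))
                self.free[array] -= take
                remaining -= take
            if remaining == 0:
                break
        self.luns[name] = extents
        return extents

pool = VirtualStoragePool({"ArrayA": 500, "ArrayB": 300, "ArrayC": 200})
print(pool.create_lun("app_data", 600))   # spans ArrayA and part of ArrayB
```

The point of the abstraction is that the 600 GB request succeeds even though no single array could satisfy it, so stranded capacity is used before new hardware is purchased.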

Best Practice #4: Use Data Compression
Compression has long been used in data communications to minimize the number of bits sent along a transmission link and in some storage technologies to reduce the amount of data that must be stored. Depending on implementation, compression can impose a performance penalty because the data must be encoded when written and decoded (decompressed) when read. Simply minimizing redundant or recurring bit patterns via compression, however, can reduce the amount of processed data that is stored by one half or more and thus reduce the amount of total storage capacity and hardware required.

Not all data is compressible, though, and some data formats have already undergone compression at the application layer. JPEG, MPEG, and MP3 file formats, for example, are already compressed and will not benefit from further compression algorithms when written to disk or tape.

 

[Figure 2 diagram content: in the top panel, Servers 1–3 attach through a SAN to physical storage on Arrays A, B, and C, each presenting individually numbered LUNs (LUN 8, LUN 1, LUN 43, LUN 22, LUN 5, LUN 55). In the bottom panel, a virtualized storage pool is layered between the servers and the physical storage, presenting simple sequential LUNs (LUN 1 through LUN 6) that are mapped to the underlying physical LUNs.]


When used in combination with security mechanisms such as data encryption, compression must be executed in the proper sequence. Data should be compressed before encryption on writes and decrypted before decompression on reads.
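A minimal sketch of that ordering, using Python's standard zlib module and a stand-in XOR routine in place of a real cipher (the key handling and cipher are assumptions for illustration only): encrypting first would make the data look random and defeat the compressor.

```python
import zlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real encryption algorithm; illustrates ordering only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def write_block(plaintext: bytes, key: bytes) -> bytes:
    # Compress first, then encrypt: ciphertext does not compress well.
    return xor_cipher(zlib.compress(plaintext), key)

def read_block(stored: bytes, key: bytes) -> bytes:
    # Reverse the order on reads: decrypt first, then decompress.
    return zlib.decompress(xor_cipher(stored, key))

key = b"demo-key"
original = b"highly repetitive data " * 100
stored = write_block(original, key)
assert read_block(stored, key) == original
print(len(original), "bytes in,", len(stored), "bytes stored")
```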

Best Practice #5: Incorporate Data Deduplication
While data compression works at the bit level, conventional data deduplication works at the disk block level. Redundant data blocks are identified and referenced to a single identical data block via pointers so that the redundant blocks do not have to be maintained intact for backup (virtual to disk or actual to tape). Multiple copies of a document, for example, may only have minor changes in different areas of the document while the remaining material in the copies has identical content. Data deduplication also works at the block level to reduce redundancy of identical files. By retaining only unique data blocks and providing pointers for the duplicates, data deduplication can reduce storage requirements by up to 20:1. As with data compression, the data deduplication engine must reverse the process when data is read so that the proper blocks are supplied to the read request.

Data deduplication may be done either in band, as data is transmitted to the storage medium, or in place, on existing stored data. In-band techniques have the obvious advantage that multiple copies of data never get made, and therefore never have to be hunted down and removed. In-place techniques, however, are required to address the immense volume of already stored data that data center managers must deal with.
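The block-level mechanism can be sketched in a few lines: fixed-size blocks are hashed, only one copy of each unique block is stored, and duplicates become pointers to that copy. The block size, hash choice, and in-memory dictionaries are illustrative assumptions; production deduplication engines use persistent indexes and often variable-length chunking.

```python
import hashlib

BLOCK_SIZE = 4096   # assumed fixed block size for the sketch

def deduplicate(data: bytes):
    """Split data into blocks, store each unique block once, and keep
    an ordered list of block hashes (the 'pointers') for reconstruction."""
    store = {}       # block hash -> block contents (written once)
    pointers = []    # ordered hashes that reconstruct the original stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        pointers.append(digest)
    return store, pointers

def rehydrate(store, pointers):
    # The engine must reverse the process on reads.
    return b"".join(store[d] for d in pointers)

data = (b"A" * BLOCK_SIZE) * 8 + (b"B" * BLOCK_SIZE) * 2
store, ptrs = deduplicate(data)
print(len(ptrs), "blocks referenced,", len(store), "unique blocks stored")
assert rehydrate(store, ptrs) == data
```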

Best Practice #6: File Deduplication
File deduplication operates at the file system level to reduce redundant copies of identical files. Similar to block-level data deduplication, the redundant copies must be identified and then referenced via pointers to a single file source. Unlike block-level data deduplication, however, file deduplication lacks the granularity to prevent redundancy of file content. If two files are 99% identical in content, both copies must be stored in their entirety. File deduplication therefore only provides a 3:1 or 4:1 reduction in data volume in general. Rich targets such as full network-based backup of laptops may do much better than this, however.


Best Practice #7: Thin Provisioning of Storage to Servers
In classic server-storage configurations, servers are allocated storage capacity based on the anticipated requirements of the applications they support. Because exceeding that storage capacity over time would result in an application failure, administrators typically over-provision storage to servers. The result of fat provisioning is higher cost, both for the extra storage capacity itself and in the energy required to support additional spinning disks that are not actively used for IT processing.

Thin provisioning is a means to satisfy the application server's expectation of a certain volume size while actually allocating less physical capacity on the storage array or virtualized storage pool. This eliminates the under-utilization issues typical of most applications, provides storage on demand, and reduces the total disk capacity required for operations. Fewer disks equate to lower energy consumption and cost, and by monitoring storage usage the storage administrator can add capacity only as required.
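The idea can be illustrated with a toy volume object that reports a large virtual size but consumes physical space only when a region is first written; the 1 GB region granularity and class name are assumptions made for the sketch.

```python
class ThinVolume:
    """Toy thin-provisioned volume: the server sees virtual_gb of capacity,
    but physical space is consumed only when a region is first written."""
    def __init__(self, virtual_gb):
        self.virtual_gb = virtual_gb
        self.allocated = set()        # 1 GB regions that have been written

    def write(self, region):
        if region >= self.virtual_gb:
            raise ValueError("write beyond the reported volume size")
        self.allocated.add(region)    # allocate on first write only

    def physical_gb(self):
        return len(self.allocated)

vol = ThinVolume(virtual_gb=1000)     # the application believes it has 1 TB
for region in range(120):             # but has only written 120 GB so far
    vol.write(region)
print(vol.physical_gb(), "GB physically allocated of", vol.virtual_gb, "GB presented")
```

Monitoring `physical_gb()` against the presented size is the toy equivalent of the usage monitoring that lets an administrator add real capacity only when it is needed.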

Best Practice #8: Leverage Resizeable Volumes
Another approach to increasing capacity utilization and thus reducing the overall disk storage footprint is to implement variable size volumes. Typically, storage volumes are of a fixed size, configured by the administrator and assigned to specific servers. Dynamic volumes, by contrast, can expand or contract depending on the amount of data generated by an application. Resizeable volumes require support from the host operating system and relevant applications, but can increase efficient capacity utilization to 70% or more. From a green perspective, more efficient use of existing disk capacity means fewer hardware resources over time and a much better energy profile.

Best Practice #9: Writeable Snapshots
Application development and testing are integral components of data center operations and can require significant increases in storage capacity to perform simulations and modeling against real data. Instead of allocating additional storage space for complete copies of live data, snapshot technology can be used to create temporary copies for testing. A snapshot of the active, primary data is supplemented by writing only the data changes incurred by testing. This minimizes the amount of storage space required for testing while allowing the active non-test applications to continue unimpeded.
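A copy-on-write sketch of that behavior follows: reads fall through to the shared base volume, and only blocks modified by the test workload consume new space. The block representation and class name are hypothetical simplifications.

```python
class WriteableSnapshot:
    """Toy copy-on-write snapshot: reads fall through to the base volume,
    and only blocks modified during testing consume new space."""
    def __init__(self, base_blocks):
        self.base = base_blocks       # shared, read-only primary data
        self.delta = {}               # block index -> modified contents

    def read(self, idx):
        return self.delta.get(idx, self.base[idx])

    def write(self, idx, data):
        self.delta[idx] = data        # copy-on-write: the base is untouched

base = [b"prod-block-%d" % i for i in range(1000)]
snap = WriteableSnapshot(base)
snap.write(7, b"test-change")
print("extra blocks stored for the test copy:", len(snap.delta))   # 1, not 1000
```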


Best Practice #10: Deploy Tiered Storage
Storage systems are typically categorized by their performance, availability, and capacity characteristics. Formerly, most application data was stored on a single class of storage system until it was eventually retired to tape for preservation. Today, however, it is possible to migrate data from one class of storage array to another as the business value and accessibility requirements of that data change over time. Tiered storage is a combination of different classes of storage systems and data migration tools that enables administrators to align the value of data to the value of the storage container in which it resides. Because second-tier storage systems typically use slower spinning or less expensive disk drives and have fewer high availability features, they consume less energy compared to first-tier systems. In addition, some larger storage arrays enable customers to deploy both high-performance and moderate-performance disk sets in the same chassis, thus enabling an in-chassis data migration.

A tiered storage strategy can help reduce your overall energy consumption while still making less frequently accessed data available to applications at a lower cost per gigabyte of storage. In addition, tiered storage is a reinforcing mechanism for data retention policies as data is migrated from one tier to another and then eventually preserved via tape or simply deleted.
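A tiering policy is easy to sketch as an age-based rule; the tier names and day thresholds below are invented for illustration, and real policies typically also weigh access frequency and business value.

```python
from datetime import datetime, timedelta

# Assumed policy: days since last access -> storage tier (illustrative only).
TIER_POLICY = [
    (30, "tier1-fast-disk"),
    (180, "tier2-capacity-disk"),
    (float("inf"), "tier3-tape-archive"),
]

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Return the tier whose age limit first covers this data set."""
    age_days = (now - last_access).days
    for limit, tier in TIER_POLICY:
        if age_days <= limit:
            return tier

now = datetime(2010, 8, 1)
datasets = {
    "quarterly-report": now - timedelta(days=10),
    "last-year-logs": now - timedelta(days=400),
}
for name, accessed in datasets.items():
    print(name, "->", assign_tier(accessed, now))
```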

Best Practice #11: Solid State Storage
Solid state storage still commands a price premium compared to mechanical disk storage, but has excellent performance characteristics and much lower energy consumption compared to spinning media. While solid state storage may not be an option for some data center budgets, it should be considered for applications requiring high performance and for tiered storage architectures as a top-tier container.

Best Practice #12: MAID and Slow-Spin Disk Technology
High performance applications typically require continuous access to storage and thus assume that all disk sets are spinning at full speed and ready to read or write data. For occasional or random access to data, however, the response time may not be as critical. MAID (massive array of idle disks) technology uses a combination of cache memory and idle disks to service requests, only spinning up disks as required. Once no further requests for data in a specific disk set are made, the drives are once again spun down to idle mode. Because each disk drive represents a power draw, MAID provides inherent green benefits. As MAID systems are accessed more frequently, however, the energy profile begins to approach that of conventional storage arrays.

Another approach is to put disk drives into slow spin mode when no requests are pending. Because slower spinning disks require less power, the energy efficiency of slow spin arrays is inversely proportional to their frequency of access.

Occasionally lengthy access times are inherent to MAID technology, so it is only useful when data access times of several seconds (the length of time it takes a disk to spin up) can be tolerated.
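The spin-up-on-demand behavior can be sketched as a small controller loop; the idle timeout, class name, and methods are assumptions for illustration rather than any vendor's implementation.

```python
import time

class MaidDiskSet:
    """Toy MAID controller: a disk set spins up on demand and is spun back
    down after an idle period, trading access latency for power."""
    def __init__(self, idle_timeout_s=60):
        self.idle_timeout_s = idle_timeout_s
        self.spinning = False
        self.last_access = 0.0

    def read(self, block):
        if not self.spinning:
            self.spinning = True        # several-second spin-up penalty in reality
        self.last_access = time.monotonic()
        return f"data for block {block}"

    def housekeeping(self):
        # Called periodically by the controller: idle disks stop drawing full power.
        if self.spinning and time.monotonic() - self.last_access > self.idle_timeout_s:
            self.spinning = False
```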

Best Practice #13: Tape Subsystems
As a storage technology, tape is the clear leader in energy efficiency. Once data is written to tape for preservation, the power bill is essentially zero. Unfortunately, however, businesses today cannot simply use tape as their primary storage without inciting a revolution among end users and bringing applications to their knees. Although the obituary for tape technology has been written multiple times over the past decade, tape endures as a viable archive media. From a green standpoint, tape is still the best option for long term data retention.

Best Practice #14: Fabric Design
Fabrics provide the interconnect between servers and storage systems. For larger data centers, fabrics can be quite extensive with thousands of ports in a single configuration. Because each switch or director in the fabric contributes to the data center power bill, designing an efficient fabric should include the energy and cooling impact as well as rational distribution of ports to service the storage network.

A mesh design, for example, typically incorporates multiple switches connected by interswitch links (ISLs) for redundant pathing. Multiple (sometimes 30 or more) meshed switches represent multiple energy consumers in the data center. Consequently, consolidating the fabric into higher port count and more energy efficient director chassis and a core-edge design can help simplify the fabric design and potentially lower the overall energy impact of the fabric interconnect.

Best Practice #15: File System Virtualization
By some industry estimates, 75% of corporate data resides outside of the data center, dispersed in remote offices and regional centers. This presents a number of issues, including the inability to comply with regulatory requirements for data security and backup, duplication of server and storage resources across the enterprise, management and maintenance of geographically distributed systems, and increased energy consumption for corporate-wide IT assets. File system virtualization includes several technologies for centralizing and consolidating remote file data, incorporating that data into data center best practices for security and backup, and maintaining local response time for remote users. From a green perspective, reducing dispersed energy inefficiencies via consolidation helps lower the overall IT energy footprint.

Best Practice #16: Server, Fabric and Storage VirtualizationData center virtualization leverage virtualization of servers, the fabricand storage to create a more flexible and efficient IT ecosystem.Server virtualization essentially deduplicates processing hardware byenabling a single hardware platform to replace up to 20 platforms.Server virtualization also facilitates mobility of applications so that theproper processing power can be applied to specific applications ondemand. Fabric virtualization enables mobility and more efficient utili-zation of interconnect assets by providing policy-based data flows fromservers to storage. Applications that require first class handling aregiven a higher quality of service delivery while less demanding applica-tion data flows are serviced by less expensive paths. In addition,technologies such as NPIV (N_Port ID Virtualization) reduce the num-ber of switches required to support virtual server connections andemerging technologies such as FCoE (Fibre Channel over Ethernet)can reduce the number of hardware interfaces required to supportboth storage and messaging traffic. Finally, storage virtualization sup-plies the enabling foundation technology for more efficient capacityutilization, snapshots, resizeable volumes and other green storagesolutions. By extending virtualization end-to-end in the data center, ITcan accomplish more with fewer hardware assets and help reducedata center energy consumption.

File system virtualization can also be used to implement tiered storage transparently to users through the use of a global namespace.

Best Practice #17: Flywheel UPS Technology
Flywheel UPSs, while more expensive up front, are several percentage points more efficient (typically greater than 97%), easier to maintain, and more reliable, and they do not have the large environmental footprint of conventional battery-backed UPSs. Forward-looking data center managers are increasingly finding that this technology is less expensive in multiple dimensions over the lifetime of the equipment.
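Conversion losses alone give a sense of the scale involved. In the sketch below, only the greater-than-97% flywheel figure comes from the text above; the 93% efficiency assumed for a conventional battery-backed UPS, the 500 kW load, and the electricity rate are assumptions for illustration.

    # Illustrative UPS conversion-loss comparison.
    # Only the 97% flywheel efficiency comes from the text; the battery UPS
    # efficiency, the load, and the rate are assumptions.

    HOURS_PER_YEAR = 8760
    RATE_PER_KWH = 0.10   # assumed utility rate, in dollars

    def annual_loss_kwh(it_load_kw, efficiency):
        """Energy drawn from the grid minus energy delivered to the load."""
        return it_load_kw * HOURS_PER_YEAR * (1 / efficiency - 1)

    load_kw = 500                                   # assumed protected load
    flywheel_loss = annual_loss_kwh(load_kw, 0.97)
    battery_loss = annual_loss_kwh(load_kw, 0.93)   # assumed conventional UPS

    delta = battery_loss - flywheel_loss
    print(f"Flywheel losses: {flywheel_loss:,.0f} kWh/year")
    print(f"Battery losses:  {battery_loss:,.0f} kWh/year")
    print(f"Difference:      {delta:,.0f} kWh/year (about ${delta * RATE_PER_KWH:,.0f})")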


Best Practice #18: Data Center Air Conditioning Improvements
The combined use of economizers and hot-aisle/cold-aisle containment can result in PUEs as low as 1.25. Because the PUE (Power Usage Effectiveness) of a traditional data center is often over 2.25, this difference can represent literally millions of dollars a year in energy savings.
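PUE is total facility power divided by IT equipment power, so for a fixed IT load the facility draw scales directly with PUE. The worked example below uses the 2.25 and 1.25 figures from the text; the 2 MW IT load and the electricity rate are assumptions for illustration.

    # PUE = total facility power / IT equipment power, so for a fixed IT
    # load, facility power = IT load x PUE. The IT load and rate are assumed.

    HOURS_PER_YEAR = 8760
    RATE_PER_KWH = 0.10   # assumed utility rate, in dollars

    def annual_facility_cost(it_load_kw, pue):
        return it_load_kw * pue * HOURS_PER_YEAR * RATE_PER_KWH

    it_load_kw = 2000   # assumed 2 MW of IT equipment
    traditional = annual_facility_cost(it_load_kw, 2.25)
    improved = annual_facility_cost(it_load_kw, 1.25)

    print(f"PUE 2.25: ${traditional:,.0f}/year")
    print(f"PUE 1.25: ${improved:,.0f}/year")
    print(f"Savings:  ${traditional - improved:,.0f}/year")

With these assumptions the difference is roughly $1.75 million per year, consistent with the claim above.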

Economizers work by using outside air instead of recirculated air when doing so uses less energy. Obviously, climate is a major factor in how effective this strategy is: heat and high humidity both reduce its effectiveness.

There are various strategies for hot/cold air containment. All depend on placing rows of racks front to front and back to back. Because almost all data center equipment is designed to draw cooled air in the front and eject heated air out the back, this arrangement concentrates the areas where heat must be evacuated and cool air supplied.

One strategy is to isolate only the cold aisles and run the rest of the room at hot-aisle temperatures. Because hot-aisle temperatures are typically in the 95°F range, this has the advantage that little to no insulation is needed in the building skin, and in cooler climates some cooling is obtained through ordinary thermal dissipation through the building skin.

Another strategy is to isolate both hot and cold aisles. This reduces the volume of air that must be conditioned, and has the advantage that humans will find the building temperature to be more pleasant.

In general, hot aisle/cold aisle technologies avoid raised floor configurations, as pumping cool air upward requires extra energy.

Best Practice #19: Increased Data Center Temperatures
Increasing data center temperatures can save significant amounts of energy. The ability to do this depends in large part on excellent temperature and power monitoring capabilities and on conditioned-air containment strategies. Typical enterprise-class disk drives are rated to 55°C (131°F), but disk lifetime suffers somewhat at these higher temperatures, and most data center managers think it unwise to get very close to that upper limit. Even tightly designed cold aisle containment measures may show 10- to 15-degree variations in temperature from the top to the bottom of a rack; the total possible variation plus the maximum measured heat gain across the rack must be subtracted from the maximum tolerated temperature to arrive at the maximum allowable cold aisle temperature. So the more precisely that air delivery can be controlled and measured, the higher the temperature one can run in the "cold" aisles.
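A worked example of that temperature budget is sketched below. Only the 55°C (131°F) drive rating comes from the text above; the safety margin, rack top-to-bottom variation, and measured heat gain are assumptions chosen for illustration.

    # Cold-aisle temperature budget, following the subtraction described above.
    # Only the 131 F drive rating is from the text; the other figures are assumed.

    drive_rating_f = 131     # enterprise disk rating, 55 C
    safety_margin_f = 15     # assumed buffer, since running near the rating is unwise
    rack_variation_f = 12    # assumed top-to-bottom spread within a cold aisle
    heat_gain_f = 20         # assumed maximum measured heat gain across the rack

    max_tolerated_f = drive_rating_f - safety_margin_f
    max_cold_aisle_f = max_tolerated_f - (rack_variation_f + heat_gain_f)

    print(f"Maximum allowable cold-aisle temperature: {max_cold_aisle_f} F")
    # With these assumptions: 131 - 15 - (12 + 20) = 84 F

Tighter monitoring and containment shrink the assumed variation and margin, which is exactly what allows the cold aisle set point to rise.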

Benefits of higher temperatures include raised chiller water temperatures and efficiency; reduced fan speed, noise, and power draw; and an increased ability to use outside air for cooling through an economizer.

Best Practice #20: Work with Your Regional Utilities
Some electrical utility companies and state agencies are partnering with customers by providing financial incentives for deploying more energy-efficient technologies. If you are planning a new data center or consolidating an existing one, incentive programs can provide guidance for the types of technologies and architectures that will give the best results.

What the SNIA is Doing About Data Center Energy Usage
The SNIA Green Storage Initiative is pursuing a multi-pronged approach to advancing energy-efficient storage networking solutions, including advocacy, promotion of standard metrics, education, development of energy best practices, and alliances with other industry energy organizations such as The Green Grid. Currently, over 20 SNIA members have joined the SNIA GSI as voting members.

A key requirement for customers is the ability to audit their current energy consumption and to take practical steps to minimize energy use. The task of developing metrics for measuring the energy efficiency of storage network elements is being performed by the SNIA Green Storage Technical Work Group (TWG). The SNIA GSI is supporting the technical work of the GS-TWG by funding the laboratory testing required for metrics development, formulating a common taxonomy for classes of storage, and promoting GS-TWG metrics for industry standardization.

The SNIA encourages all storage networking vendors, channels, technologists, and end users to actively participate in the green storage initiative and help discover additional ways to minimize the impact of IT storage operations on power consumption. If, as industry analysts forecast, adequate power for many data centers will simply not be available, we all have a vital interest in reducing our collective power requirements and making our technology do far more with far less environmental impact.


For more information about the SNIA Green Storage Initiative, link to: http://www.snia.org/forums/green/

To view the SNIA GSI Green Tutorials, link to: http://www.snia.org/education/tutorials#green

About the SNIA
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies and 7000 individuals spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market. For additional information, visit the SNIA web site at www.snia.org.

NOTE: The section "Green Storage Terminology" has been omitted from this reprint; however, you can find green storage terms in the "Glossary" on page 141.


Appendix B: Online Sources

ANSI ansi.org

ASHRAE ashrae.com

Blade Systems Alliance bladesystems.org

Brocade brocade.com

Brocade Communities community.brocade.com

Brocade Data Center Virtualization brocade.com/virtualization

Brocade TechBytes brocade.com/techbytes

Climate Savers climatesaverscomputing.org

Data Center Journal datacenterjournal.com

Data Center Knowledge datacenterknowledge.com

Green Storage Initiative snia.org/forums/green

Greener Computing greenercomputing.com

IEEE ieee.org

IETF ietf.org

LEED usgbc.org/DisplayPage.aspx?CMSPageID=222

SNIA snia.org

The Green Grid thegreengrid.org

Uptime Institute uptimeinstitute.org

US Department of Energy - Data Centers www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html


Glossary

Data center network terminology

ACL Access control list, a security mechanism for assigning various permissions to a network device.

AES256-GCM An IEEE encryption standard for data on tape.

AES256-XTS An IEEE encryption standard for data on disk.

ANSI American National Standards Institute

API Application Programming Interface, a set of calling conventions for program-to-program communication.

ASHRAE American Society of Heating, Refrigerating, and Air-Conditioning Engineers

ASIC Application-specific integrated circuit, hardware designed for specific high-speed functions required by protocol applications such as Fibre Channel and Ethernet.

Access Gateway A Brocade product designed to optimize storage I/O for blade server frames.

Access layer Network switches that provide direct connection to servers or hosts.

Active power The energy consumption of a system when powered on and under normal workload.

Adaptive Networking

Brocade technology that enables proactive changes in network configurations based on defined traffic flows.

Aggregation layer Network switches that provide connectivity between multiple access layer switches and the network backbone or core.

Application server A compute platform optimized for hosting applications for other programs or client access.

ARP spoofing Address Resolution Protocol spoofing, a hacker technique for associating a hacker's Layer 2 (MAC) address with a trusted IP address.


Asynchronous Data Replication

For storage, writing the same data to two separate disk arrays based on a buffered scheme that may not capture every data write, typically used for long-distance disaster recovery.

BTU British Thermal Unit, a metric for heat dissipation.

Blade server A server architecture that minimizes the number of components required per blade, while relying on the shared elements (power supply, fans, memory, I/O) of a common frame.

Blanking plates Metal plates used to cover unused portions of equipment racks to enhance air flow.

Bright green Applying new technologies to enhance energy efficiency while maintaining or improving productivity.

CEE Converged Enhanced Ethernet, modifications to conventional 10 Gbps Ethernet to provide the deterministic data delivery associated with Fibre Channel, also known as Data Center Bridging (DCB).

CFC Chlorofluorocarbon, a refrigerant that has been shown to deplete ozone.

Control path In networking, handles configuration and traffic exceptions and is implemented in software. Since it takes more time to handle control path messages, it is often logically separated from the data path to improve performance.

CNA Converged network adapter, a DCB-enabled adapter that supports both FCoE and conventional TCP/IP traffic.

CRAC Computer room air conditioning

Core layer Typically high-performance network switches that provide centralized connectivity for the data center aggregation and access layer switches.

Data compression Bit-level reduction of redundant bit patterns in a data stream via encoding. Typically used for WAN transmissions and archival storage of data to tape.

Data deduplication Block-level reduction of redundant data by replacing duplicate data blocks with pointers to a single good block.

Data path In networking, handles data flowing between devices (servers, clients, storage, and so on). To keep up with increasing speeds, the data path is often implemented in hardware, typically in ASICs.

Dark green Addressing energy consumption by the across-the-board reduction of energy consuming activities.


DAS Direct-attached storage, connection of disks or disk arrays directly to servers with no intervening network.

DCB Data Center Bridging, enhancements made to Ethernet LANs for use in data center environments, standards developed by IEEE and IETF.

DCC Device Connection Control, a Brocade SAN security mechanism to allow only authorized devices to connect to a switch.

DCiE Data Center Infrastructure Efficiency, a Green Grid metric for measuring IT equipment power consumption in relation to total data center power draw.

Distribution layer Typically a tier in the network architecture that routes traffic between LAN segments in the access layer and aggregates access layer traffic to the core layer.

DMTF Distributed Management Task Force, a standards body focused on systems management.

DoS/DDoS Denial of service/Distributed denial of service, a hacking technique to prevent a server from functioning by flooding it with continuous network requests from rogue sources.

DWDM Dense wave division multiplexing, a technique for transmitting multiple data streams on a single fiber optic cable by using different wavelengths.

Data center A facility to house computer systems, storage, and network operations.

ERP Enterprise resource planning, an application that coordinates resources, information and functions of business across the enterprise.

Economizer Equipment used to treat external air to cool a data center or building.

Encryption A technique to encode data into a form that can't be understood so as to secure it from unauthorized access. Often, a key is used to encode and decode the data from its encrypted format.

End of row EoR, provides network connectivity for multiple racks of servers by provisioning a high-availability switch at the end of the equipment rack row.

Energy The capacity of a physical system to do work.

Energy efficiency Using less energy to provide an equivalent level of energy service.

Energy Star An EPA program that leverages market dynamics to foster energy efficiency in product design.

Exabyte 1 billion gigabytes


FAIS Fabric Application Interface Standard, an ANSI standard for providing storage virtualization services from a Fibre Channel switch or director.

FCF Fibre Channel forwarder, the function in FCoE that forwards frames between a Fibre Channel fabric and an FCoE network.

FCIP Fibre Channel over IP, an IETF specification for encapsulating Fibre Channel frames in TCP/IP, typically used for SAN extension and disaster recovery applications.

FCoE Fibre Channel over Ethernet, an ANSI standard for encapsulating Fibre Channel frames over Converged Enhanced Ethernet (CEE) to simplify server connectivity.

FICON Fibre Connectivity, a Fibre Channel Layer 4 protocol for mapping legacy IBM transport over Fibre Channel, typically used for distance applications.

File deduplication Reduction of file copies by replacing duplicates with pointers to a single original file.

File server A compute platform optimized for providing file-based data to clients over a network.

Five-nines 99.999% availability, or 5.26 minutes of downtime per year.

Flywheel UPS An uninterruptible power supply technology using a balanced flywheel and kinetic energy to provide transitional power.

Gateway In networking, a gateway converts one protocol to another at the same layer of the networking stack.

GbE Gigabit Ethernet

Gigabit (Gb) 1000 megabits

Gigabyte (GB) 1000 megabytes

Greenwashing A by-product of excessive marketing and ineffective engineering.

GSI Green Storage Initiative, a SNIA initiative to promote energy efficient storage practices and to define metrics for measuring the power consumption of storage systems and networks.

GSLB Global server load balancing, a Brocade ServerIron ADX feature that enables client requests to be redirected to the most available and higher-performance data center resource.

HBA Host bus adapter, a network interface optimized for storage I/O, typically to a Fibre Channel SAN.


HCFC Hydrochlorofluorocarbon, a refrigerant shown to deplete ozone.

HPC High-Performance Computing, typically supercomputers or computer clusters that provide teraflop (10^12 floating point operations per second) levels of performance.

HVAC Heating, ventilation, and air conditioning

Hot aisle/cold aisle The arrangement of data center equipment racks in alternating rows to optimize air flow for cooling.

Hot-swap The ability to replace a hardware component without disrupting ongoing operations.

Hypervisor Software or firmware that enables multiple instances of an operating system and applications (for example, VMs) to run on a single hardware platform.

ICL Inter-chassis link, high-performance channels used to connect multiple Brocade DCX/DCX-4S backbone platform chassis in two- or three-chassis configurations.

Idle power The power consumption of a system when powered on but with no active workload.

IEEE Institute of Electrical and Electronics Engineers, a standards body responsible for, among other things, Ethernet standards.

IETF Internet Engineering Task Force, responsible for TCP/IP de facto standards.

IFL Inter-fabric link, a set of Fibre Channel switch ports (Ex_Port on the router and E_Port on the switch) that can route device traffic between independent fabrics.

IFR Inter-fabric routing, an ANSI standard for providing connectivity between separate Fibre Channel SANs without creating an extended flat Layer 2 network.

ILM Information lifecycle management, a technique for migrating storage data from one class of storage system to another based on the current business value of the data.

Initiator A SCSI device within a host that initiates I/O between the host and storage.

IOPS/W Input/output operations per second per watt. A metric for evaluating storage I/O performance per fixed unit of energy.

iSCSI Internet SCSI, an IETF standard for transporting SCSI block data over conventional TCP/IP networks.

iSER iSCSI Extensions for RDMA, an IETF specification to facilitate direct memory access by iSCSI network adapters.


ISL Inter-switch Link, Fibre Channel switch ports (E_Ports) used to provide switch-to-switch connectivity.

iSNS Internet Simple Name Server, an IETF specification to enable device registration and discovery in iSCSI environments.

Initiator In storage, a server or host system that initiates storage I/O requests.

kWh Kilowatt hours, a unit of electrical usage commonly used by power companies for billing purposes.

LACP Link Aggregation Control Protocol, an IEEE specification for grouping multiple separate network links between two switches to provide a faster logical link.

LAN Local area network, a network covering a small physical area, such as a home or office, or small groups of buildings, such as a campus or airport, typically based on Ethernet and/or Wi-Fi.

Lite (or Light) green Solutions or products that purport to be energy efficient but which have only negligible green benefits.

LUN Logical Unit Number, commonly used to refer to a volume of storage capacity configured on a target storage system.

LUN masking A means to restrict advertisement of available LUNs to prevent unauthorized or unintended storage access.

Layer 2 In networking, a link layer protocol for device-to-device communication within the same subnet or network.

Layer 3 In networking, a routing protocol (for example, IP) that enables devices to communicate between different subnets or networks.

Layer 4–7 In networking, upper-layer network protocols (for example, TCP) that provide end-to-end connectivity, session management, and data formatting.

MAID Massive array of idle disks. A storage array that only spins up disks to active state when data in a disk set is accessed or written.

MAN Metropolitan area network, a mid-distance network often covering a metropolitan wide radius (about 200 km).

MaxTTD Maximum time to data. For a given category of storage, the maximum time allowed to service a data read or write.

MRP Metro Ring Protocol, a Brocade value-added protocol to enhance resiliency and recovery from a link or switch outage.


Metadata In storage virtualization, a data map that associates physical storage locations with logical storage locations.

Metric A standard unit of measurement, typically part of a system of measurements to quantify a process or event within a given domain. GB/W and IOPS/W are examples of proposed metrics that can be applied for evaluating the energy efficiency of storage systems.

Non-removable media library

A virtual tape backup system with spinning disks and shorter maximum time to data access compared to conventional tape.

NAS Network-attached storage, use of an optimized file server or appliance to provide shared file access over an IP network.

Near online storage Storage systems with longer maximum time to data access, typical of MAID and fixed content storage (CAS).

Network consolidation

Replacing multiple smaller switches and routers with larger switches that provide higher port densities, performance and energy efficiency.

Network virtualization

Technology that enables a single physical network infrastructure to be managed as multiple separate logical networks or for multiple physical networks to be managed as a single logical network.

NPIV N_Port ID Virtualization, a Fibre Channel standard that enables multiple logical network addresses to share a common physical network port.

OC3 A 155 Mbps WAN link speed.

OLTP On-line Transaction Processing, commonly associated with business applications that perform transactions with a database.

Online storage Storage systems with fast data access, typical of most data center storage arrays in production environments.

Open Systems A vendor-neutral, non-proprietary, standards-based approach for IT equipment design and deployment.

Orchestration Software that enables centralized coordination between virtualization capabilities in the server, storage, and network domains to automate data center operations.

PDU Power Distribution Unit. A system that distributes electrical power, typically stepping down the higher input voltage to voltages required by end equipment. A PDU can also be a single-inlet/multi-outlet device within a rack cabinet.


Petabyte 1000 terabytes

PoE/PoE+ Power over Ethernet, IEEE standards for powering IP devices such as VoIP phones over Ethernet cabling.

Port In Fibre Channel, a port is the physical connection on a switch, host, or storage array. Each port has a personality (N_Port, E_Port, F_Port, and so on) and the personality defines the port's function within the overall Fibre Channel protocol.

QoS Quality of service, a means to prioritize network traffic on a per-application basis.

RAID Redundant array of independent disks, a storage technology for expediting reads and writes of data to disks and/or providing data recovery in the event of disk failure.

Raised floor Typical of older data center architecture, a raised floor provides space for cable runs between equipment racks and cold air flow for equipment cooling.

RBAC Role-based access control, network permissions based on defined roles or work responsibilities.

Removable media library

A tape or optical backup system with removable cartridges or disks and >80ms maximum time to data access.

Resizeable volumes Variable length volumes that can expand or contract depending on the data storage requirements of an application.

RPO Recovery point objective, defines how much data is lost in a disaster.

RSCN Registered state change notification, a Fibre Channel fabric feature that enables notification of storage resources leaving or entering the SAN.

RTO Recovery time objective, defines how long data access is unavailable in a disaster.

RSTP Rapid Spanning Tree Protocol, a bridging protocol that replaces conventional STP and enables an approximately 1-second recovery in the event of a primary link failure.

SAN Storage area network, a shared network infrastructure deployed between servers, disk arrays, and tape subsystems, typically based on Fibre Channel.

SAN boot Firmware that enables a server to load its boot image across a SAN.

SCC Switch Connection Control, a Brocade SAN security mechanism to allow only authorized switch-to-switch links.


SI-EER Site Infrastructure Energy Efficiency Ratio, a formula developed by The Uptime Institute to calculate total data center power consumption in relation to IT equipment power consumption.

SLA Service-level agreement, typically a contracted assurance of response time or performance of an application.

SMB Small and medium business, companies typically with fewer than 1000 employees.

SMI-S Storage Management Initiative Specification, a SNIA standard based on CIM/WBEM for managing heterogeneous storage infrastructures.

SNIA Storage Networking Industry Association, a standards body focused on data storage hardware and software.

SNS Simple name server, a Fibre Channel switch feature that maintains a database of attached devices and capabilities to streamline device discovery.

Solid state storage A storage device based on flash or other static memory technology that emulates conventional spinning disk media.

SONET Synchronous Optical Networking, a WAN transport standard for multiplexing multiple protocols over a fiber optic infrastructure.

Server A compute platform used to host one or more applications for client access.

Server platform Hardware (typically CPU, memory, and I/O) used to support file or application access.

Server virtualization Software or firmware that enables multiple instances of an operating system and applications to be run on a single hardware platform.

sFlow An IETF specification for performing network packet captures at line speed for diagnostics and analysis.

Single Initiator Zoning

A method of securing traffic on a Fibre Channel fabric so that only the storage targets used by a host initiator can connect to that initiator.

Snapshot A point-in-time copy of a data set or volume used to restore data to a known good state in the event of data corruption or loss.

SPOF Single point of failure.

Storage taxonomy A hierarchical categorization of storage networking products based on capacity, availability, port count, and other attributes. A storage taxonomy is required for the development of energy efficiency metrics so that products in a similar class can be evaluated.


Storage virtualization

Technology that enables multiple storage arrays to be logically managed as a single storage pool.

Synchronous Data Replication

For storage, writing the same data to two separate storage systems on a write-by-write basis so that identical copies of current data are maintained, typically used for metro distance disaster recovery.

T3 A 45 Mbps WAN link speed.

Target A SCSI target within a storage device that communicates with a host SCSI initiator.

TCP/IP Transmission Control Protocol/Internet Protocol, used to move data in a network (IP) and to move data between cooperating computer applications (TCP). The Internet commonly relies on TCP/IP.

Terabyte 1000 gigabytes

Thin provisioning Allocating less physical storage to an application than is indicated by the virtual volume size.

Tiers Often applied to storage to indicate different cost/performance characteristics and the ability to dynamically move data between tiers based on a policy such as ILM.

ToR Top of rack, provides network connectivity for a rack of equipment by provisioning one or more switches in the upper slots of each rack.

TRILL Transparent Interconnect for Lots of Links, an emerging IETF standard to enable multiple active paths through an IP network infrastructure.

Target In storage, a storage device or system that receives and executes storage I/O requests from a server or host.

Three-tier architecture

A network design that incorporates access, aggregation, and core layers to accommodate growth and maintain performance.

Top Talkers A Brocade technology for identifying the most active initiators in a storage network.

Trunking In Fibre Channel, a means to combine multiple inter-switch links (ISLs) to create a faster virtual link.

TWG Technical Working Group, commonly formed to define open, publicly available technology standards.

Type 1 virtualization Server virtualization in which the hypervisor runs directly on the hardware.

Type 2 virtualization Server virtualization in which the hypervisor runs inside an instance of an operating system.


U A unit of vertical space (1.75 inches) used to measure how much rack space a piece of equipment requires, sometimes expressed as RU (Rack Unit).

UPS Uninterruptible power supply

uRPF Unicast Reverse Path Forwarding, an IETF specification for blocking packets from unauthorized network addresses.

VCS Virtual Cluster Switching, a new class of Brocade-developed technologies that overcomes the limitations of conventional Ethernet networking by applying non-stop operations, any-to-any connectivity, and the intelligence of fabric switching.

VLAN Virtual LAN, an IEEE standard that enables multiple hosts to be configured as a single network regardless of their physical location.

VM Virtual machine, one of many instances of a virtual operating system and applications hosted on a physical server.

VoIP Voice over IP. A method of carrying telephone traffic over an IP network.

VRF Virtual Routing and Forwarding, a means to enable a single physical router to maintain multiple separate routing tables and thus appear as multiple logical routers.

VRRP Virtual Router Redundancy Protocol, an IETF specification that enables multiple routers to be configured as a single virtual router to provide resiliency in the event of a link or route failure.

VSRP Virtual Switch Redundancy Protocol, a Brocade value-added protocol to enhance network resilience and recovery from a link or switch failure.

Virtual Fabrics An ANSI standard to create separate logical fabrics within a single physical SAN infrastructure, often spanning multiple switches.

Virtualization Technology that provides a logical abstraction layer between the administrator or user and the physical IT infrastructure.

WAN Wide area network, commonly able to span the globe. WANs commonly employ TCP/IP networking protocols.

WWN World Wide Name, a unique 64-bit identifier assigned to a Fibre Channel initiator or target.

Work cell A unit of rack-mounted IT equipment used to calculate energy consumption, developed by Intel.


Zettabyte 1000 exabytes

Zoning A Fibre Channel standard for assigning specific initiators and targets as part of a separate group within a shared storage network infrastructure.


Index

Symbols"Securing Fibre Channel Fabrics" by

Roger Bouchard 55

Aaccess control lists (ACLs) 27, 57Access Gateway 22, 28access layer 71

cabling 72oversubscription 72

Adaptive Networking services 48Address Resolution Protocol (ARP)

spoofing 78aggregation layer 71

functions 74air conditioning 5air flow systems 5ambient temperature 10, 14American National Standards

Institute T11.5 84ANSI/INCITS T11.5 standard 41ANSI/TIA-942 Telecommunications

Infrastructure Standard for Data Centers 2, 3

application delivery controllers 80performance 82

application load balancing 81, 85ASHRAE Thermal Guidelines for

Data Processing Environments 10asynchronous data replication 65Automatic Migration of Port Profiles

(AMPP) 120

B
backup 59
Bidirectional Forwarding Detection (BFD) 77
blade servers 21
  storage access 28
  VMs 22
blade.org 22
blanking plates 13
boot from SAN 24
boot LUN discovery 25
Brocade Management Pack for Microsoft Service Center Virtual Machine Manager 86
Brocade Network Advisor 119
Brocade One 117
Brocade Virtual Access Layer (VAL) 118
Brocade Virtual Cluster Switching (VCS) 118
BTU (British Thermal Units) per hour (h) 10

C
CFC (chlorofluorocarbon) 14
computer room air conditioning (CRAC) 4, 14
consolidation
  data centers 70
  server 21
converged fabrics 118, 119
cooling 14
cooling towers 15
core layer 71
  functions 74
customer-centric approach 117


D
dark fiber 65, 67
Data Center Bridging (DCB) 119
data center consolidation 46, 48
data center evolution 117
Data Center Infrastructure Efficiency (DCiE) 6
data center LAN
  bandwidth 69
  consolidation 76
  design 75
  infrastructure 70
  security 77
  server platforms 72
data encryption 56
data encryption for data-at-rest 27
decommissioned equipment 13
dehumidifiers 14
denial of service (DoS) attacks 77
dense wavelength division multiplexing (DWDM) 65, 67
Device Connection Control (DCC) 57
disaster recovery (DR) 65
distance extension 65
  technologies 66
distributed DoS (DDoS) attacks 77
Distributed Management Task Force (DMTF) 19, 84
dry-side economizers 15

E
economizers 14
EMC Invista software 44, 87
Emerson Power survey 1
encryption 56
  data-in-flight 27
encryption keys 56
energy efficiency 7
  Brocade DCX 54
  new technology 70
  product design 53, 79
Environmental Protection Agency (EPA) 10
EPA Energy Star 17
Ethernet networks 69
external air 14


F
F_Port Trunking 28
Fabric Application Interface Standard (FAIS) 41, 84
fabric management 119
fabric-based security 55
fabric-based storage virtualization 41
fabric-based zoning 26
fan modules 53
FastWrite acceleration 66
Fibre Channel over Ethernet (FCoE)
  compared to iSCSI 61
Fibre Channel over IP (FCIP) 62
FICON acceleration 66
floor plan 11
forwarding information base (FIB) 78
frame redirection in Brocade FOS 57
Fujitsu fiber optic system 15

G
Gartner prediction 1
Gigabit Ethernet 59
global server load balancing (GSLB) 82
Green Storage Initiative (GSI) 53
Green Storage Technical Working Group (GS TWG) 53

H
HCFC (hydrochlorofluorocarbon) 14
high-level metrics 7
Host bus adapters (HBAs) 23
hot aisle/cold aisle 11
HTTP (HyperText Transfer Protocol) 80
HTTPS (HyperText Transfer Protocol Secure) 80
humidifiers 14
humidity 10
humidity probes 15
hypervisor 18
  secure access 19


I
IEEE
  AES256-GCM encryption algorithm for tape 56
  AES256-XTS encryption algorithm for disk 56
information lifecycle management (ILM) 39
ingress rate limiting (IRL) 49
Integrated Routing (IR) 63
Intel x86 18
intelligent fabric 48
inter-chassis links (ICLs) 28
Invista software from EMC 44
IP address spoofing 78
IP network links 66
IP networks
  layered architecture 71
  resiliency 76
iSCSI 58
  Serial RDMA (iSER) 60
IT processes 83

K
key management solutions 57

L
Layer 4–7 70
Layer 4–7 switches 80
link congestion 49
logical fabrics 63
long-distance SAN connectivity 67

M
management framework 85
measuring energy consumption 12
metadata mapping 42, 43
Metro Ring Protocol (MRP) 77
Multi-Chassis Trunking (MCT) 120

N
N_Port ID Virtualization (NPIV) 24, 28
N_Port Trunking 23
network health monitoring 85
network segmentation 78


O
open systems approach 84
Open Virtual Machine Format (OVF) 84
outside air 14
ozone 14

P
particulate filters 14
Patterson and Pratt research 12
power consumption 70
power supplies 53
preferred paths 50

Q
quality of service
  application tiering 49
Quality of Service (QoS) 24, 26

R
Rapid Spanning Tree Protocol (RSTP) 77
recovery point objective (RPO) 65
recovery time objective (RTO) 65
refrigerants 14
registered state change notification (RSCN) 63
RFC 3176 standard 77
RFC 3704 (uRPF) standard 78
RFC 3768 standard 76
role-based access control (RBAC) 27
routing information base (RIB) 78

S
SAN boot 24
SAN design 45, 46
  storage-centric design 48
security
  SAN 55
  SAN security myths 55
  Web applications 81
security solutions 27
Server and StorageIO Group 78


server virtualization 18
  IP networks 69
  mainstream 86
  networking complement 79
service-level agreements (SLAs) 27
  network 80
sFlow
  RFC 3176 standard 77
simple name server (SNS) 60, 63
Site Infrastructure Energy Efficiency Ratio (SI-EER) 5
software as a service (SaaS) 70
Spanning Tree Protocol (STP) 73
standardized units of joules 9
state change notification (SCN) 45
Storage Application Services (SAS) 87
Storage Networking Industry Association (SNIA) 53
  Green Storage Power Measurement Specification 53
  Storage Management Initiative (SMI) 84
storage virtualization 35
  fabric-based 41
  metadata mapping 38
  tiered data storage 40
support infrastructure 4
Switch Connection Control (SCC) 57
synchronous data replication 65
Synchronous Optical Networking (SONET) 65

T
tape pipelining algorithms 66
temperature probes 15
The Green Grid 6
tiered data storage 40
Top Talkers 26, 51
top-of-rack access solution 73
traffic isolation (TI) 51
traffic prioritization 26
Transparent Interconnection of Lots of Links (TRILL) 119


U
Unicast Reverse Path Forwarding (uRPF) 78
UPS systems 3
Uptime Institute 5

V
variable speed fans 12
Virtual Cluster Switching (VCS)
  architecture 122
Virtual Fabrics (VF) 62
virtual IPs (VIPs) 79
virtual LUNs 37
virtual machines (VMs) 17
  migration 86, 120
  mobility 20
Virtual Router Redundancy Protocol (VRRP) 76
Virtual Routing and Forwarding (VRF) 78
virtual server pool 20
Virtual Switch Redundancy Protocol (VSRP) 77
virtualization
  network 79
  orchestration 84
  server 18
  storage 35
Virtualization Management Initiative (VMAN) 19
VM mobility
  IP networks 70
VRRP Extension (VRRPE) 76

W
wet-side economizers 15
work cell 12
World Wide Name (WWN) 25
