10 Gigabit Ethernet Virtual Data Center Architectures



White Paper

Introduction

Consolidation of data center resources offers an opportunity for architectural transformation based on the use of scalable, high density, high availability technology solutions, such as high port-density 10 GbE switch/routers, cluster and grid computing, blade or rack servers, and network attached storage. Consolidation also opens doors for virtualization of applications, servers, storage, and networks. This suite of highly complementary technologies has now matured to the point where mainstream adoption in large data centers has been occurring for some time.

According to a recent Yankee Group survey of both large and smaller enterprises, 62% of respondents already have a server virtualization solution at least partially in place, while another 21% plan to deploy the technology over the next 12 months.

A consolidated and virtualized 10 GbE data center offers numerous benefits:

• Lower OPEX/CAPEX and TCO through reduced complexity, reductions in the number of physical servers and switches, improved lifecycle management, and better human and capital resource utilization

• Increased adaptability of the network to meet changing business requirements

• Reduced requirements for space, power, cooling, and cabling. For example, in power/cooling (P/C) alone, the following savings are possible:
– Server consolidation via virtualization: up to 50–60% of server P/C
– Server consolidation via Blade or Rack servers: up to an additional 20–30% of server P/C
– Switch consolidation with high density switching: up to 50% of switch P/C

• Improved business continuance and compliance with regulatory security standards
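
To make the power/cooling figures above concrete, the following back-of-the-envelope sketch (in Python, using illustrative midpoint values rather than measured data) shows how the server-side and switch-side savings would compound:

    # Back-of-the-envelope compounding of the power/cooling (P/C) savings above.
    # 0.55 and 0.25 are midpoints of the quoted ranges; the blade/rack savings is
    # applied to the already-virtualized load, one possible reading of "additional".

    server_pc = 100.0                                     # arbitrary baseline units
    after_virtualization = server_pc * (1 - 0.55)         # 50-60% reduction -> ~45 units
    after_blade_rack = after_virtualization * (1 - 0.25)  # additional 20-30% -> ~34 units

    switch_pc = 100.0
    after_high_density = switch_pc * (1 - 0.50)           # 50% reduction -> 50 units

    print(f"server P/C: {server_pc:.0f} -> {after_blade_rack:.0f} units")
    print(f"switch P/C: {switch_pc:.0f} -> {after_high_density:.0f} units")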

The virtualized 10 GbE data center also provides the foundation for a service oriented architecture (SOA). From an application perspective, SOA is a virtual application architecture where the application is comprised of a set of component services (e.g., implemented with web services) that may be distributed throughout the data center or across multiple data centers. SOA’s emphasis on application modularity and re-use of application component modules enables enterprises to readily create high level application services that encapsulate existing business processes and functions, or address new business requirements.

From an infrastructure perspective, SOA is a resource architecture where applications and services draw on a shared pool of resources rather than having physical resources rigidly dedicated to specific applications. The application and infrastructure aspects of SOA are highly complementary. In terms of applications, SOA offers a methodology to dramatically increase productivity in application creation/modification, while the SOA-enabled infrastructure, embodied by the 10 GbE virtual data center, dramatically improves the flexibility, productivity, and manageability of delivering application results to end users by drawing on a shared pool of virtualized computing, storage, and networking resources.

This document provides guidance in designing consolidated, virtualized, and SOA-enabled data centers based on the ultra high port-density 10 GbE switch/router products of Force10 Networks in conjunction with other specialized hardware and software components provided by Force10 technology partners, including those offering:

• Server virtualization and server management software
• iSCSI storage area networks
• GbE and 10 GbE server NICs featuring I/O virtualization and protocol acceleration
• Application delivery switching, load balancers, and firewalls


The Foundation for a Service Oriented Architecture

Over the last several years data center managers have had to deal with the problem of server sprawl to meet the demand for application capacity. As a result, the prevalent legacy enterprise data center architecture has evolved as a multi-tier structure patterned after high volume websites. Servers are organized into three separate tiers of the data center network comprised of web or front-end servers, application servers, and database/back-end servers, as shown in Figure 1. This architecture has been widely adapted to enterprise applications, such as ERP and CRM, that support web-based user access.


Multiple tiers of physically segregated servers as shown in Figure 1 are frequently employed because a single tier of aggregation and access switches may lack the scalability to provide the connectivity and aggregate performance needed to support large numbers of servers. The ladder structure of the network shown in Figure 1 also minimizes the traffic load on the data center core switches because it isolates intra-tier traffic, web-to-application traffic, and application-to-database traffic from the data center core.

While this legacy architecture has performed fairly well, it has some significant drawbacks. The physical segregation of the tiers requires a large number of devices, including three sets of Layer 2 access switches, three sets of Layer 2/Layer 3 aggregation switches, and three sets of appliances such as load balancers, firewalls, IDS/IPS devices, and SSL offload devices that are not shown in the figure. The proliferation of devices is further exacerbated by dedicating a separate data center module similar to that shown in Figure 1 to each enterprise application, with each server running a single application or application component. This physical application/server segregation typically results in servers that are, on average, only 20% utilized. This wastes 80% of server capital investment and support costs. As a result, the inefficiency of dedicated physical resources per application is the driving force behind on-going efforts to virtualize the data center.
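
As a rough illustration of the consolidation math implied by the ~20% utilization figure, the following sketch estimates the physical host count after virtualization; the server count and the 65% target utilization are assumptions for the example, not figures from this paper:

    import math

    # Hypothetical consolidation estimate driven by the ~20% utilization figure.
    physical_servers = 120        # one application per physical server today
    avg_utilization = 0.20        # average utilization cited above
    target_utilization = 0.65     # assumed safe ceiling for virtualized hosts

    aggregate_demand = physical_servers * avg_utilization
    hosts_needed = math.ceil(aggregate_demand / target_utilization)

    print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts "
          f"(~{physical_servers / hosts_needed:.1f}:1 consolidation)")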

The overall complexity of the legacy design has a number of undesirable side-effects:

• The infrastructure is difficult to manage, especially when additional applications or application capacity is required

• Optimizing performance requires fairly complex traffic engineering to ensure that traffic flows follow predictable paths

• When load-balancers, firewalls, and other appliances are integrated within the aggregation switch/router to reduce box count, it may be necessary to use active-passive redundancy configurations rather than the more efficient active-active redundancy more readily achieved with stand alone appliances. Designs calling for active-passive redundancy for appliances and switches in the aggregation layer require twice as much throughput capacity as active-active redundancy designs

• The total cost of ownership (TCO) is high due to low resource utilization levels combined with the impact of complexity on downtime and on the requirements for power, cooling, space, and management time

Figure 1. Legacy three tier data center architecture


Virtualization provides the stability of running a single application per (virtual) server, while greatly reducing the number of physical servers required and improving utilization of server resources. VM technology also greatly facilitates the mobility of applications among virtual servers and the provisioning of additional server resources to satisfy fluctuations in demand for critical applications. Server virtualization and cluster computing are highly complementary technologies for fully exploiting emerging multi-core CPU microprocessors. VM technology provides robustness in running multiple applications per core plus facilitating mobility of applications across VMs and cores. Cluster computing middleware allows multiple VMs or multiple cores to collaborate in the execution of a single application. For example, VMware Virtual SMP™ enables a single virtual machine to span multiple physical cores, virtualizing processor-intensive enterprise applications such as ERP and CRM. The VMware Virtual Machine File System (VMFS) is a high-performance cluster file system that allows clustering of virtual machines spanning multiple physical servers. By 2010, the number of cores per server CPU is projected to be in the range of 16–64, with network I/O requirements in the 100 Gbps range. Since most near-term growth in chip-based CPU performance will come from higher core count rather than increased clock rate, data centers requiring higher application performance will need to place increasing emphasis on technologies such as cluster computing and Virtual SMP.

NIC Virtualization: With numerous VMs per physical server, network virtualization has to be extended to the server and its network interface. Each VM is configured with a virtual NIC that shares the resources of the server's array of real NICs. This level of virtualization, together with a virtual switch capability providing inter-VM switching on a physical server, is provided by VMware Infrastructure software. Higher performance I/O virtualization is possible using intelligent NICs that provide hardware support for I/O virtualization, off-loading the processing supporting protocol stacks, virtual NICs, and virtual switching from the server CPUs. NICs that support I/O virtualization as well as protocol offload (e.g., TCP/IP, RDMA, iSCSI) are available from Force10 technology partners including NetXen, Neterion, Chelsio, NetEffect, and various server vendors. Benchmark results have shown that protocol offload NICs can dramatically improve network throughput and latency for both data applications (e.g., HPC, clustered databases, and web servers) and network storage access (NAS and iSCSI SANs).


Design Principles for the Next Generation Virtual Data Centers

The Force10 Networks approach to next generation data center designs is to build on the legacy architecture's concept of modularity, but to greatly simplify the network while significantly improving its efficiency, scalability, reliability, and flexibility, resulting in much lower total cost of ownership. This is accomplished by consolidating and virtualizing the network, computing, and storage resources, resulting in an SOA-enabled data center infrastructure.

Following are the key principles of data center consolidation and virtualization upon which the Virtual Data Center Architecture is based:

POD Modularity: A POD (point of delivery) is a group of compute, storage, network, and application software components that work together to deliver a service or application. The POD is a repeatable construct, and its components must be consolidated and virtualized to maximize the modularity, scalability, and manageability of data centers. Depending on the architectural model for applications, a POD may deliver a high level application service or it may provide a single component of an SOA application, such as a web front end or database service. In spite of the fact that the POD modules share a common architecture, they can be customized to support a tiered services model. For example, the security, resiliency/availability, and QoS capabilities of an individual POD can be adjusted to meet the service level requirements of the specific application or service that it delivers. Thus, an eCommerce POD would be adapted to deliver the higher levels of security/availability/QoS required vs. those suitable for lower tier applications, such as email.
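
As an informal illustration of the POD as a repeatable, parameterizable construct, the sketch below models a POD profile whose security, redundancy, and QoS settings are tuned per service tier; the field names and values are purely illustrative and do not correspond to any product configuration:

    from dataclasses import dataclass, field

    @dataclass
    class PodProfile:
        """One repeatable POD, parameterized per service tier (illustrative fields)."""
        name: str
        service_tier: str                       # e.g. "business-critical" or "standard"
        application_vlans: list[int] = field(default_factory=list)
        load_balanced: bool = False
        stateful_firewall: bool = True
        redundancy: str = "active-active"       # or "active-passive"
        qos_class: str = "standard"

    # The same construct tuned to two different service levels:
    ecommerce_pod = PodProfile(name="ecommerce", service_tier="business-critical",
                               application_vlans=[110, 111], load_balanced=True,
                               qos_class="premium")
    email_pod = PodProfile(name="email", service_tier="standard",
                           application_vlans=[210])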

Server Consolidation and Virtualization: Server virtualization based on virtual machine (VM) technology, such as VMware ESX Server, allows numerous virtual servers to run on a single physical server, as shown in Figure 2.

Figure 2. Simplified view of virtual machine technology


Figure 3. Consolidation of data center aggregation and access layers

Network Consolidation and Virtualization: Highly scalable and resilient 10 Gigabit Ethernet switch/routers, exemplified by the Force10 E-Series, provide the opportunity to greatly simplify the network design of the POD module, as well as the data center core. Leveraging VLAN technology together with the E-Series scalability and resiliency allows the distinct aggregation and access layers of the legacy data center design to be collapsed into a single aggregation/access layer of switch/routing, as shown in Figure 3.

The integrated aggregation/access switch becomes the basic network switching element upon which a POD is built.

The benefits of a single layer of switch/routing within the POD include reduced switch count, simplified traffic flow patterns, elimination of Layer 2 loops and STP scalability issues, and improved overall reliability. The ultra high density, reliability, and performance of the E-Series switch/router maximize the scalability of the design model both within PODs and across the data center core. The scalability of the E-Series often enables network consolidations with a >3:1 reduction in the number of data center switches. This high reduction factor is due to the combination of the following factors:

• Elimination of the access switching layer.
• More servers per POD aggregation switch, resulting in fewer aggregation switches.
• More POD aggregation switches per core switch, resulting in fewer core switches.
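
The following arithmetic sketch shows how these three factors can combine into a greater than 3:1 switch reduction; the server counts and per-switch densities are assumptions chosen for illustration, not vendor specifications:

    import math

    servers = 960                                            # assumed servers in one module

    # Legacy design: separate access and aggregation layers (assumed densities)
    access_switches = math.ceil(servers / 40)                # 24 access switches
    aggregation_switches = 2 * 3                             # a redundant pair per tier
    legacy_total = access_switches + aggregation_switches    # 30 switches

    # Consolidated design: one redundant pair of high-density aggregation/access
    # switch/routers per POD, each POD hosting 240 dual-homed servers
    pods = math.ceil(servers / 240)                          # 4 PODs
    consolidated_total = pods * 2                            # 8 switches

    print(f"{legacy_total} legacy vs {consolidated_total} consolidated "
          f"({legacy_total / consolidated_total:.2f}:1 reduction)")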

Storage Resource Consolidation and Virtualization: Storage resources accessible over the Ethernet/IP data network further simplify the data center LAN by minimizing the number of separate switching fabrics that must be deployed and managed. 10 GbE switching in the POD provides ample bandwidth for accessing unified NAS/iSCSI IP storage devices, especially when compared to the bandwidth available for Fibre Channel SANs. Consolidated, shared, and virtualized storage also facilitates VM-based application provisioning and mobility since each physical server has shared access to the necessary virtual machine images and required application data. The VMFS provides multiple VMware ESX Servers with concurrent read-write access to the same virtual machine storage. The cluster file system thus enables live migration of running virtual machines from one physical server to another, automatic restart of failed virtual machines on a different physical server, and the clustering of virtual machines.

Global Virtualization: Virtualization should not be constrained to the confines of the POD, but should be capable of being extended to support a pool of shared resources spanning not only a single POD, but also multiple PODs, the entire data center, or even multiple data centers. Virtualization of the infrastructure allows the PODs to be readily adapted to an SOA application model where the resource pool is called upon to respond rapidly to changes in demand for services and to new services being installed on the network.


Ultra Resiliency/Reliability: As data centers are consolidated and virtualized, resiliency and reliability become even more critical aspects of the network design. This is because the impact of a failed physical resource is now more likely to extend to multiple applications and larger numbers of user flows. Therefore, the virtual data center requires the combination of ultra high resiliency devices, such as the E-Series switch/routers, and an end-to-end network design that takes maximum advantage of active-active redundancy configurations, with rapid fail-over to standby resources.

Security: Consolidation and virtualization also place increased emphasis on data center network security. With virtualization, application or administrative domains may share a pool of common resources, creating the requirement that the logical segregation among virtual resources be even stronger than the physical segregation featured in the legacy data center architecture. This level of segregation is achieved by having multiple levels of security at the logical boundaries of the resources being protected within the PODs and throughout the data center. In the virtual data center, security is provided by:

• Full virtual machine isolation to prevent ill-behaved or compromised applications from impacting any other virtual machine/application in the environment

• Application and control VLANs to provide traffic segregation

• Wire-rate switch/router ACLs applied to intra-POD and inter-POD traffic

• Stateful virtual firewall capability that can be customized to specific application requirements within the POD

• Security-aware appliances for load balancing and other traffic management and acceleration functions

• IDS/IPS appliance functionality at full wire-rate for real-time protection of critical POD resources from both known intrusion methodologies and day-one attacks

• AAA for controlled user access to the network and network devices to enforce policies defining user authentication and authorization profiles
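
As a toy illustration of the wire-rate ACL item above, the sketch below evaluates inter-POD flows against a first-match permit/deny rule list; the rule format and addresses are invented for the example and are unrelated to FTOS ACL syntax:

    from ipaddress import ip_address, ip_network

    # First-match rule list: (action, source network, destination network, dest port)
    acl = [
        ("permit", ip_network("10.10.1.0/24"), ip_network("10.20.5.0/24"), 1433),
        ("permit", ip_network("10.10.0.0/16"), ip_network("10.10.0.0/16"), None),
        ("deny",   ip_network("0.0.0.0/0"),    ip_network("0.0.0.0/0"),    None),
    ]

    def evaluate(src, dst, dport):
        """Return the action of the first matching rule (implicit deny otherwise)."""
        for action, src_net, dst_net, port in acl:
            if (ip_address(src) in src_net and ip_address(dst) in dst_net
                    and (port is None or port == dport)):
                return action
        return "deny"

    print(evaluate("10.10.1.15", "10.20.5.7", 1433))   # permit (database flow)
    print(evaluate("10.30.2.9",  "10.20.5.7", 1433))   # deny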

Figure 4 provides an overview of the architecture of a consolidated data center based on 10 Gigabit Ethernet switch/routers providing an integrated layer of aggregation and access switching, with Layer 4–Layer 7 services being provided with stand alone appliances. The consolidation of the data center network simplifies deployment of virtualization technologies that will be described in more detail in subsequent sections of this document.

Overall data center scalability is addressed by configuring multiple PODs connected to a common set of data center core switches to meet application/service capacity, organizational, and policy requirements. In addition to server connectivity, the basic network design of the POD can be utilized to provide other services on the network, such as ISP connectivity, WAN access, etc.

Within an application POD, multiple servers running the same application are placed in the same application VLAN with appropriate load balancing and security services provided by the appliances. Enterprise applications, such as ERP, that are based on distinct, segregated sets of web, application, and database servers can be implemented within a single tier of scalable L2/L3 switching using server clustering and distinct VLANs for segregation of web servers, application servers, and database servers. Alternatively, where greater scalability is required, the application could be distributed across a web server POD, an application server POD, and a database POD.

Further simplification of the design is achieved using IP/Ethernet storage attachment technologies, such as NAS and iSCSI, with each application's storage resources incorporated within the application-specific VLAN.

Figure 4. Reference design for the virtual data center


This section of the document focuses on the various design aspects of the consolidated and virtualized data center POD module.

Network Interface Controller (NIC) Teaming

As noted earlier, physical and virtual servers dedicated to a specific application are placed in a VLAN reserved for that application. This simplifies the logical design of the network and satisfies the requirement of many clustered applications for Layer 2 adjacency among nodes participating in the cluster.

In order to avoid single points of failure (SPOF) in the access portion of the network, NIC teaming is recommended to allow each physical server to be connected to two different aggregation/access switches. For example, a server with two teamed NICs, sharing a common IP address and MAC address, can be connected to both POD switches as shown in Figure 5. The primary NIC is in the active state, and the secondary NIC is in standby mode, ready to be activated in the event of failure in the primary path to the POD.
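
A minimal sketch of this primary/standby teaming behavior is shown below; the interface names and addresses are illustrative:

    class NicTeam:
        """Two physical NICs sharing one IP/MAC; only one is active at a time."""

        def __init__(self, ip, mac, primary, secondary):
            self.ip, self.mac = ip, mac
            self.members = [primary, secondary]
            self.active = primary

        def link_down(self, nic):
            """Fail over to the other team member when the active link drops."""
            if nic == self.active:
                self.active = next(m for m in self.members if m != nic)

    team = NicTeam("10.10.1.15", "00:1e:c9:aa:bb:cc", primary="eth0", secondary="eth1")
    team.link_down("eth0")          # uplink to POD switch 1 fails
    print(team.active)              # eth1 now carries traffic toward POD switch 2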

NIC teaming can also be used for bonding several GbE NICs to form a higher speed link aggregation group (LAG) connected to one of the POD switches. As 10 GbE interfaces continue to ride the volume/cost curve, GbE NIC teaming will become a relatively less cost-effective means of increasing bandwidth per server.

NIC Virtualization

When server virtualization is deployed, a number of VMs generally share a physical NIC. Where the VMs are spread across multiple applications, the physical NIC needs to support traffic for multiple VLANs. An elegant solution for multiple VMs and VLANs sharing a physical NIC is provided by VMware ESX Server Virtual Switch Tagging (VST). As shown in Figure 6, each VM's virtual NICs are attached to a port group on the ESX Server Virtual Switch that corresponds to the VLAN associated with the VM's application. The virtual switch then adds 802.1Q VLAN tags to all outbound frames, extending 802.1Q trunking to the server and allowing multiple VMs to share a single physical NIC.
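
The following simplified model illustrates the VST idea: each VM's virtual NIC maps to a port group, each port group maps to an application VLAN, and the virtual switch tags frames on their way to the shared physical NIC. Port-group names and VLAN IDs are invented for the example:

    # Port group -> application VLAN, and VM -> port group (invented names/IDs)
    PORT_GROUPS = {"web-vms": 110, "erp-vms": 210}
    VM_PLACEMENT = {"web-vm-01": "web-vms", "erp-vm-07": "erp-vms"}

    def tag_outbound(vm_name, payload):
        """Attach the (simplified) 802.1Q VLAN ID before the frame leaves the host."""
        vlan_id = PORT_GROUPS[VM_PLACEMENT[vm_name]]
        return {"vlan": vlan_id, "payload": payload}

    print(tag_outbound("web-vm-01", b"frame bytes"))   # {'vlan': 110, 'payload': ...}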

The overall benefits of NIC teaming and I/O virtualization can be combined with VMware Infrastructure's ESX Server V3.0 by configuring multiple virtual NICs per VM and multiple real NICs per physical server. ESX Server V3.0 NIC teaming supports a variety of fault tolerant and load sharing operational modes in addition to the simple primary/secondary teaming model described at the beginning of this section. Figure 7 shows how VST, together with simple primary/secondary NIC teaming, supports red and green VLANs while eliminating SPOFs in a POD employing server virtualization.

Figure 5. NIC teaming for data center servers

Figure 6. VMware virtual switch tagging with NIC teaming


As noted earlier, for improved server and I/O performance, the virtualization of NICs, virtual switching, and VLAN tagging can be offloaded to intelligent Ethernet adapters that provide hardware support for protocol processing, virtual networking, and virtual I/O.

Layer 2 Aggregation/Access Switching

With a collapsed aggregation/access layer of switching, the Layer 2 topology of the POD is extremely simple, with servers in each application VLAN evenly distributed across the two POD switches. This distributes the traffic across the POD switches, which form an active-active redundant pair. The Layer 2 topology is free from loops for intra-POD traffic. Nevertheless, for extra robustness, it is recommended that application VLANs be protected from loops that could be formed by configuration errors or other faults, using standard practices for MSTP/RSTP.

The simplicity of the Layer 2 network makes it feasible for the POD to support large numbers of real and virtual servers, and also makes it feasible to extend application VLANs through the data center core switch/router to other PODs in the data center or even to PODs in other data centers. When VLANs are extended beyond the POD, per-VLAN MSTP/RSTP is required to deal with possible loops in the core of the network.

In addition, it may also be desirable to allocate applications to PODs in a manner that minimizes data flows between distinct application VLANs within the POD. This preserves the POD's horizontal bandwidth for intra-VLAN communications between clustered servers and for Ethernet/IP-based storage access.

Layer 3 Aggregation/Access Switching

Figure 8 shows the logical flow of application traffic through a POD. For web traffic from the Internet, traffic is routed in the following way:

1. Internet flows are routed with OSPF from the core to a VLAN/security zone for untrusted traffic based on public, virtual IP addresses (VIPs).

2. Load balancers (LBs) route the traffic to another untrusted VLAN, balancing the traffic based on the private, real IP addresses of the servers. Redundant load balancers are configured with VRRP for gateway redundancy. For load balanced applications, the LBs function as the default virtual gateway.

3. Finally, traffic is routed by firewalls (FWs) to the trusted application VLANs on which the servers reside. The firewalls also use VRRP for gateway redundancy. For applications requiring stateful inspection of flows but no load balancing, the firewalls function as the default virtual gateway.
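
The sketch below walks one Internet flow through the three steps above in a highly simplified form; the VIP, server addresses, and VLAN number are invented, and the OSPF routers, VRRP-paired load balancers, and firewalls are reduced to table lookups:

    from itertools import cycle

    VIP_POOLS = {"203.0.113.10": cycle(["10.10.1.11", "10.10.1.12"])}  # LB server pool
    TRUSTED_VLAN = 110                                                 # app VLAN behind the FW

    def forward_internet_flow(vip):
        # Step 1: OSPF delivers the flow from the core to the untrusted VLAN for the VIP.
        # Step 2: the load balancer (VRRP gateway) picks a real server behind the VIP.
        real_ip = next(VIP_POOLS[vip])
        # Step 3: the firewall (also VRRP-protected) inspects the flow and places it
        # on the trusted application VLAN where that server resides.
        return real_ip, TRUSTED_VLAN

    print(forward_internet_flow("203.0.113.10"))   # ('10.10.1.11', 110)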

Intranet traffic would be routed through a somewhat different set of VLAN security zones based on whether load balancing is needed and the degree of trust placed in the source/destination for that particular application flow. In many cases, Intranet traffic would bypass untrusted security zones, with switch/router ACLs providing ample security to allow Intranet traffic to be routed through the data center core directly from one application VLAN to another without traversing load balancing or firewall appliances.

Figure 7. VMware virtual switch tagging with NIC teaming

Figure 8. Logical topology for Internet flows in the POD


In addition to application VLANs, control VLANs are configured to isolate control traffic among the network devices from application traffic. For example, control VLANs carry routing updates among switch/routers. In addition, a redundant pair of load balancers or stateful firewalls would share a control VLAN to permit traffic flows to failover from the primary to the secondary appliance without loss of state or session continuity. In a typical network design, trunk links carry a combination of traffic for application VLANs and link-specific control VLANs.

From the campus core switches through the data center core switches, there are at least two equal cost routes to the server subnets. This permits the core switches to load balance Layer 3 traffic to each POD switch using OSPF ECMP routing. Where application VLANs are extended beyond the POD, the trunks to and among the data center core switches will carry a combination of Layer 2 and Layer 3 traffic.
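
The following sketch illustrates the flow-hashing behavior behind ECMP load balancing toward the two POD switches: packets of the same flow always hash to the same next hop. The 5-tuple hash shown is a common scheme used here for illustration; actual hashing is platform specific:

    import hashlib

    NEXT_HOPS = ["pod1-switch-a", "pod1-switch-b"]     # two equal-cost OSPF routes

    def ecmp_next_hop(src_ip, dst_ip, proto, sport, dport):
        """Hash the 5-tuple so all packets of a flow take the same path."""
        key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
        index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(NEXT_HOPS)
        return NEXT_HOPS[index]

    print(ecmp_next_hop("198.51.100.7", "10.10.1.11", "tcp", 51515, 443))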

Layer 4-7 Aggregation/Access Switching

Because the POD design is based on stand-alone appliances for Layer 4-7 services (including server load balancing, SSL termination/acceleration, VPN termination, and firewalls), data center designers are free to deploy devices with best-in-class functionality and performance that meet the particular application requirements within each POD. For example, Layer 4-7 devices may support a number of advanced features, including:

• Integrated functionality: For example, load balancing, SSL acceleration, and packet filtering functionality may be integrated within a single device, reducing box count, while improving the reliability and manageability of the POD

• Device Virtualization: Load balancers and firewalls that support virtualization allow physical device resources to be partitioned into multiple virtual devices, each with its own configuration. Device virtualization within the POD allows virtual appliances to be devoted to each application, with the configuration corresponding to the optimum device behavior for that application type and its domain of administration

• Active/Active Redundancy: Virtual appliances also facilitate high availability configurations where pairs of physical devices provide active-active redundancy. For example, a pair of physical firewalls can be configured with one set of virtual firewalls customized to each of the red VLANs and a second set customized for each of the green VLANs. The physical firewall attached to a POD switch would have the red firewalls in an active state and its green firewalls in a standby state. The second physical firewall (connected to the second POD switch) would have the complementary configuration. In the event of an appliance or link failure, all of the active virtual firewalls on the failed device would fail over to the standby virtual firewalls on the remaining device
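
A small sketch of this active-active virtual firewall arrangement appears below; device and context names are illustrative:

    # Each physical firewall carries one active context set and the standby copy
    # of the other set (invented names).
    fw1 = {"red": "active",  "green": "standby"}    # attached to POD switch 1
    fw2 = {"red": "standby", "green": "active"}     # attached to POD switch 2

    def fail_over(failed, survivor):
        """Promote every context on the survivor whose peer was active on the failed box."""
        for ctx, state in failed.items():
            if state == "active":
                survivor[ctx] = "active"
            failed[ctx] = "failed"

    fail_over(fw1, fw2)
    print(fw2)   # {'red': 'active', 'green': 'active'} - one box now carries both sets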

Resource Virtualization Within and Across PODs

One of the keys to server virtualization within and across PODs is a server management environment for virtual servers that automates operational procedures and optimizes availability and efficiency in utilization of the resource pool.

The VMware Virtual Center provides the server management function for VMware Infrastructure, including ESX Server, VMFS, and Virtual SMP. With Virtual Center, virtual machines can be provisioned, configured, started, stopped, deleted, relocated, and remotely accessed. In addition, Virtual Center supports high availability by allowing a virtual machine to automatically fail over to another physical server in the event of host failure. All of these operations are simplified because virtual machines are completely encapsulated in virtual disk files stored centrally using shared NAS or iSCSI SAN storage. The Virtual Machine File System allows a server resource pool to concurrently access the same files to boot and run virtual machines, effectively virtualizing VM storage.

Virtual Center also supports the organization of ESX Servers and their virtual machines into clusters, allowing multiple servers and virtual machines to be managed as a single entity. Virtual machines can be provisioned to a cluster rather than linked to a specific physical host, adding another layer of virtualization to the pool of computing resources.

VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, complete transaction integrity, and continuity of network connectivity via the appropriate application VLAN. Live migration of virtual machines enables hardware maintenance without scheduling downtime and the resulting disruption of business operations. VMotion also allows virtual machines to be continuously and automatically optimized within resource pools for maximum hardware utilization, flexibility, and availability.


VMware Distributed Resource Scheduler (DRS) works with VMware Infrastructure to continuously automate the balancing of virtual machine workloads across a cluster in the virtual infrastructure. When guaranteed resource allocation cannot be met on a physical server, DRS will use VMotion to migrate the virtual machine to another host in the cluster that has the needed resources.
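
The placement logic DRS automates can be sketched roughly as follows; capacities, reservations, and host names are illustrative numbers, and the real product considers many more factors:

    hosts = {"esx-01": {"capacity": 32, "used": 30},   # e.g. GB of reserved memory
             "esx-02": {"capacity": 32, "used": 12},
             "esx-03": {"capacity": 32, "used": 20}}

    def place(vm_reservation, current_host):
        """Keep the VM where it is if its guarantee fits; otherwise pick the host
        with the most free capacity that can satisfy the reservation."""
        free = lambda h: hosts[h]["capacity"] - hosts[h]["used"]
        if free(current_host) >= vm_reservation:
            return current_host
        candidates = [h for h in hosts if h != current_host and free(h) >= vm_reservation]
        return max(candidates, key=free) if candidates else current_host

    print(place(vm_reservation=6, current_host="esx-01"))   # -> esx-02 (migrate via VMotion)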

Figure 9 shows an example of server resource re-allocation within a POD. In this scenario, a group of virtual and/or physical servers currently participating in cluster A is re-allocated to a second cluster B running another application. Virtual Center and VMotion are used to de-install the cluster A software images from the servers being transferred and then install the required cluster B image including application, middleware, operating system, and network configuration. As part of the process, the VLAN membership of the transferred servers is changed from VLAN A to VLAN B.

Virtualization of server resources, including VMotion-enabled automated VM failovers and resource re-allocation as described above, can readily be extended across PODs simply by extending the application VLANs across the data center core trunks using 802.1Q VLAN trunking. Therefore, the two clusters shown in Figure 9 could just as well be located in distinct physical PODs. With VLAN extension, a virtual POD can be defined that spans multiple physical PODs. Without this form of POD virtualization, it would be necessary to use patch cabling between physical PODs in order to extend the computing resources available to a given application. Patch cabling among physical PODs is an awkward solution for ad hoc connectivity, especially when the physical PODs are on separate floors of the data center facility.

As noted earlier, the simplicity of the POD Layer 2 network makes this VLAN extension feasible without running the risk of STP-related instabilities. With application VLANs and cluster membership extended throughout the data center, the data center trunks carry a combination of Layer 3 and Layer 2 traffic, potentially with multiple VLANs per trunk, as shown in Figure 10. The 10 GbE links between the PODs provide ample bandwidth to support VM clustering, VMotion transfers and failovers, as well as access to shared storage resources.

Figure 9. Re-allocation of server resources within the POD

Figure 10. Multiple VLANs per trunk


Resource Virtualization Across Data Centers

Resource virtualization can also be leveraged among data centers sharing the same virtual architecture. As a result, Virtual Center management of VMotion-based backup and restore operations can provide redundancy and disaster recovery capabilities among enterprise data center sites. This form of global virtualization is based on an N x 10 GbE Inter-Data Center backbone, which carries a combination of Layer 2 and Layer 3 traffic resulting from extending application and control VLANs from the data center cores across the 10 GbE MAN/WAN network, as shown in Figure 11.

In this scenario, policy routing and other techniques would be employed to keep traffic as local as possible, using remote resources only when local alternatives are not appropriate or not currently available. Redundant Virtual Center server management operations centers ensure the availability and efficient operation of the globally virtualized resource pool even if entire data centers are disrupted by catastrophic events.
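
The keep-traffic-local policy can be sketched as a simple preference ordering, illustrated below with invented site names; real policy routing operates on routes and prefixes rather than a resource table:

    RESOURCE_POOLS = {
        "dc-east": {"local": True,  "available": False},   # local pool currently down
        "dc-west": {"local": False, "available": True},
    }

    def select_pool(pools):
        """Prefer an available local pool; fall back to a remote one only if needed."""
        ranked = sorted(pools.items(),
                        key=lambda kv: (not kv[1]["available"], not kv[1]["local"]))
        name, state = ranked[0]
        return name if state["available"] else None

    print(select_pool(RESOURCE_POOLS))   # 'dc-west' - remote used only as a fallback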

Migration from Legacy Data Center Architectures

The best general approach to migrating from a legacy 3-tier data center architecture to a virtual data center architecture is to start at the server level and follow a step-by-step procedure replacing access switches, distribution/aggregation switches, and finally data center core switches. One possible blueprint for such a migration is as follows:

1. Select an application for migration. Upgrade and virtualize the application's servers with VMware ESX Server software and NICs as required to support the desired NIC teaming functionality and/or NIC Virtualization. Install VMware Virtual Center in the NOC.

2. Replace existing access switches specific to the chosen application with E-Series switch/routers. Establish a VLAN for the application if necessary and configure the E-Series switch to conform to the existing access networking model.

3. Migrate any remaining applications supported by the set of legacy distribution switches in question to E-Series access switches.

4. Transition load balancing and firewall VLAN connectivity to the E-Series along with OSPF routing among the application VLANs. Existing distribution switches still provide connectivity to the data center core.

5. Introduce new E-Series data center core switch/routers with OSPF and 10 GbE, keeping the existing core routers in place. If necessary, configure OSPF in old core switches and re-distribute routes from OSPF to the legacy routing protocol and vice versa.

6. Remove the set of legacy distribution switches and use the E-Series switches for all aggregation/access functions. At this point, a single virtualized POD has been created.

7. Now the process can be repeated until all applications and servers in the data center have been migrated to integrated PODs. The legacy data center core switches can be removed either before or after full POD migration.

Figure 11. Global virtualization


Summary

As enterprise data centers move through consolidation phases toward next generation architectures that increasingly leverage virtualization technologies, the importance of very high performance Ethernet switch/routers will continue to grow. Switch/routers with ultra high capacity coupled with ultra high reliability/resiliency contribute significantly to the simplicity and attractive TCO of the virtual data center. In particular, the E-Series offers a number of advantages for this emerging architecture:

• Smallest footprint per GbE port or per 10 GbE port due to highest port densities

• Ultra-high power efficiency requiring only 4.7 watts per GbE port, simplifying high density configurations and minimizing the growing costs of power and cooling

• Ample aggregate bandwidth to support unification of aggregation and access layers of the data center network plus unification of data and storage fabrics

• System architecture providing a future-proof migration path to the next generation of Ethernet consolidation/virtualization/unification at 100 Gbps

• Unparalleled system reliability and resiliency featuring:
– multi-processor control plane
– control plane and switching fabric redundancy
– modular switch/router operating system (OS) supporting hitless software updates and restarts

A high performance 10 GbE switched data center infrastructure provides the ideal complement for local and global resource virtualization. The combination of these fundamental technologies as described in this guide provides the basic SOA-enabled modular infrastructure needed to fully support the next wave of SOA application development, where an application's component services may be transparently distributed throughout the enterprise data center or even among data centers.

References:

General discussion of Data Center Consolidation and Virtualization: www.force10networks.com/products/pdf/wp_datacenter_con-virt.pdf

E-Series Reliability and Resiliency: www.force10networks.com/products/highavail.asp

Next Generation Terabit Switch/Routers: www.force10networks.com/products/nextgenterabit.asp

High Performance Network Security (IPS): www.force10networks.com/products/hp_network_security.asp

iSCSI over 10 GbE: www.force10networks.com/products/iSCSI_10GE.asp

VMware Infrastructure 3 Documentation: www.vmware.com/support/pubs/vi_pubs.html

© 2007 Force10 Networks, Inc. All rights reserved. Force10 Networks and E-Series are registered trademarks, and Force10, the Force10 logo, P-Series, S-Series, TeraScale and FTOS are trademarks of Force10 Networks, Inc. All other company names are trademarks of their respective holders. Information in this document is subject to change without notice. Certain features may not yet be generally available. Force10 Networks, Inc. assumes no responsibility for any errors that may appear in this document.

WP18 107 v1.2

Force10 Networks, Inc.
350 Holger Way
San Jose, CA 95134 USA
www.force10networks.com

408-571-3500 PHONE

408-571-3550 FACSIMILE
