Validation Test Report
Brocade VCS Fabric Technology with Tintri T620 Hybrid Flash Array (NOS 7.0.0)
Copyright

Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, NetIron, SAN Health, ServerIron, and TurboIron are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
The authors and Brocade Communications Systems, Inc. shall have no liability or responsibility to any person or entity with respect to any loss, cost, liability, or damages arising from the information contained in this book or the computer programs that accompany it.
The product described by this document may contain “open source” software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Table of Contents

Copyright
Preface
    Document History
    Overview
    Purpose of This Document
    Audience
    Objective
    Related Documents
    About Brocade
    About Tintri
Configure DUT and Test Equipment
    Task 1. Brocade VCS Fabric Configuration
    Task 2. Configure Tintri VMstore T620 Flash Array
    Task 3. Configure Virtualization Environment
        VMware Environment Setup
        RHEV Environment Setup
        Hyper-V Environment Setup
Tintri T620 Test Report
    What's New in This Report
    Test Plan
    Scope
    Test Configuration
    DUT Descriptions
    DUT Specifications
    Test Equipment
Test Cases
    1.1 Fabric Initialization – Base Functionality
        1.1.1 Storage Device – Physical and Logical Login with Speed Negotiation
        1.1.2 NAS Connectivity
        1.1.3 vLAG Configuration
    1.2 ETHERNET STORAGE – ADVANCED FUNCTIONALITY
        1.2.1 Storage Device – Jumbo Frame/MTU Size Validation
        1.2.2 NAS Bandwidth Validation
        1.2.3 Storage Device – w/Congested Fabric
        1.2.4 Storage Device – NAS/CIFS Protocol Jammer Test Suite
        1.2.5 VDX Buffer Settings Validation
        1.2.6 Storage Device Interface Monitoring Using MAPS
        1.2.7 AMPP Feature Validation – Automatic Migration of Port Profiles
    1.3 STRESS & ERROR RECOVERY
        1.3.1 Storage Device Fabric IO Integrity – Congested Fabric
        1.3.2 Storage Device Integrity – Device Recovery from Port Toggle and Manual Cable Pull
        1.3.3 Storage Device Integrity – Device Recovery from Device Relocation
        1.3.4 Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run
        1.3.5 Storage Device Recovery – ISL Port Toggle – Extended Run
        1.3.6 Storage Device Recovery – ISL Port Toggle (Entire Switch)
        1.3.7 Storage Device Recovery – VDX 8770 Director Blade Maintenance
        1.3.8 Storage Device Recovery – Switch Offline
        1.3.9 Storage Device Recovery – Switch Firmware Download HCL (Where Applicable)
    1.4 Optional/Additional Tests
        1.4.1 Storage Device Firmware Update
        1.4.2 Workload Simulation on Hyper-V and RHEV with Medusa
        1.4.3 Workload Simulation on VMware with VMware IOAnalyzer and Medusa
Test Conclusions
Preface
Document History

Date         Version   Description
2016-03-28   1.0       Draft 1
Overview

The Solid State Ready (SSR) program is a comprehensive testing and configuration initiative to validate the interoperability of Fibre Channel and IP flash storage with a Brocade network infrastructure. This program provides testing of multiple fabrics and heterogeneous servers, NICs, and HBAs in a large port-count Brocade environment. The SSR qualification program helps verify seamless interoperability and optimum performance of solid state storage in Brocade FC and Ethernet fabrics.
Purpose of This Document

The goal of this document is to demonstrate the compatibility of the Tintri VMstore T620 array in a Brocade Ethernet fabric. This document provides a test report on the SSR qualification test plan executed on the Tintri VMstore T620 array connected to a Brocade VCS fabric running NOS v7.0.0.
Audience

The target audience for this document includes storage administrators, solution architects, system engineers, and technical development representatives.
Objective

1. Test the Tintri VMstore T620 array with the Brocade VCS Ethernet fabric running NOS v7.0.0 under different stress and error recovery scenarios, to validate the interoperability and integration of the Tintri array with the Brocade VCS fabric.
2. Validate the performance of the Brocade VCS fabric in a solid state storage environment for high-throughput and low-latency applications.
Related Documents

References:
• Brocade Network OS Administrator's Guide
• Brocade Network OS Command Reference
• Brocade VCS Fabric Design Guide
• Brocade Network OS MAPS Administrator's Guide
• Brocade Network OS Layer 2 Switching Configuration Guide
About Brocade

Brocade® networking solutions empower the world's leading organizations to transition smoothly to a world where applications and information reside anywhere. By delivering agility and innovation for cloud-based environments, Brocade helps organizations modernize their networks and accelerate their journey to the New IP.

In particular, Brocade solutions for storage networking, data center routing, Software-Defined Networking (SDN), and Network Functions Virtualization (NFV) give organizations the power to capitalize on the unique business opportunities driven by virtualization and the cloud.

To deliver a best-in-class solution, Brocade partners with world-class IT companies around the globe. www.brocade.com.
About Tintri

A new model for IT is here. Virtualized applications are the norm, and traditional storage is 20 years overdue for a shakeup. The mismatch between a dynamic virtualized environment and traditional storage products is forcing enterprises to be bound by tedious, blind solutions that don't address the most important needs of IT – visibility into, and control of, the dynamic nature of data in a virtualized world. This is why Tintri decided to build smart storage that sees, learns, and easily adapts – enabling IT to focus on virtualized applications instead of managing storage infrastructure. After all, applications drive business, and infrastructure exists to support the apps. This means it's far more important for IT teams to focus on performance, QoS, speed to deployment, and scalability of apps instead of managing the storage infrastructure. Our application-aware storage is able to see how applications behave at the virtualization layer and present information in a way that's useful to IT professionals. Equally important, every step in the Tintri experience is designed to be profoundly simple. But don't take just our word for it – find out how our customers describe their experiences with Tintri.
Configure DUT and Test Equipment

Some of the required and recommended configurations for the test bed systems are covered here.
Task 1. Brocade VCS Fabric Configuration

1. The Brocade VDX switches are configured to form a Brocade VCS fabric in Logical Chassis cluster mode. To form the fabric in Logical Chassis mode, each VDX switch is assigned a unique RBridge ID and an identical VCS ID, and Logical Chassis mode is enabled on all switches. The cluster auto-forms after the switches reboot.

<==========>
switch# vcs vcsid 2 rbridge-id 55 logical-chassis enable

# sh vcs
Config Mode           : Distributed
VCS Mode              : Logical Chassis
VCS ID                : 2
VCS GUID              : 8d438961-3730-4cf3-828c-59ce47a32e6f
Total Number of Nodes : 8

Rbridge-Id   WWN                        Management IP   VCS Status   Fabric Status   HostName
----------------------------------------------------------------------------------------------
55           10:00:50:EB:1A:62:84:1F    10.38.66.55     Online       Online          VDX6740_066_055
56           10:00:50:EB:1A:62:89:AB    10.38.66.56     Online       Online          VDX6740_066_056
90           10:00:50:EB:1A:81:0A:78    10.38.66.90     Online       Online          VDX6940_066_090
111          10:00:50:EB:1A:22:27:DA    10.38.66.111    Online       Online          VDX6740_066_111
112          10:00:50:EB:1A:20:D3:81    10.38.66.112    Online       Online          VDX6740_066_112
119          10:00:50:EB:1A:62:83:7B    10.38.66.119    Online       Online          VDX6740_066_119
120          10:00:50:EB:1A:62:8C:33    10.38.66.120    Online       Online          VDX6740_066_120
126         >10:00:00:05:33:14:47:80*   10.38.66.126    Online       Online          VDX8770_066_126
<==========>
2. Configure a separate VLAN for the NAS traffic on the VCS fabric, and associate all host and target ports in the network with this VLAN. In this setup, VLAN 200 is used for NAS.

<==========>
sw# conf t
sw(config)# interface Vlan 200
sw(config-Vlan-200)# exit
<==========>
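As an illustrative sketch of the association step (the port-channel number is a placeholder), a host- or target-facing interface is added to the NAS VLAN with a trunk allow list:

<==========>
sw(config)# interface Port-channel 15
sw(config-Port-channel-15)# switchport
sw(config-Port-channel-15)# switchport mode trunk
sw(config-Port-channel-15)# switchport trunk allowed vlan add 200
<==========>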
3. Configure jumbo frames on all host and target switch ports in the VCS fabric.

<==========>
# sh run int po 14
interface Port-channel 14
 vlag ignore-split
 mtu 9216
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
<==========>
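Jumbo frames only help when every hop honors them; a quick end-to-end check sends a full-size frame with fragmentation disallowed (a sketch; 8972 bytes is 9000 minus the 28-byte IP/ICMP overhead, and the target shown is this test bed's NAS server IP):

<==========>
# From an ESXi host
vmkping -d -s 8972 192.168.200.242

# From a Linux host
ping -M do -s 8972 192.168.200.242
<==========>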
4. Enable Auto-NAS on the VCS fabric and set up a NAS server IP. Enabling Auto-NAS auto-configures the CEE map with the NAS traffic assigned a CoS value of 2.

<==========>
# show running-config nas
nas auto-qos
!
nas server-ip 192.168.200.242/32 vlan 200

# show running-config cee-map
cee-map default
 precedence 1
 priority-group-table 1 weight 60 pfc on
 priority-group-table 15.0 pfc off
 priority-group-table 15.1 pfc off
 priority-group-table 15.2 pfc off
 priority-group-table 15.3 pfc off
 priority-group-table 15.4 pfc off
 priority-group-table 15.5 pfc off
 priority-group-table 15.6 pfc off
 priority-group-table 15.7 pfc off
 priority-group-table 2 weight 20 pfc off
 priority-group-table 3 weight 20 pfc off
 priority-table 2 2 3 2 1 2 2 15.0
 remap fabric-priority priority 0
 remap lossless-priority priority 0
<==========>
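Once traffic is flowing, the Auto-NAS classifier can be spot-checked per RBridge with the same statistics command the NAS connectivity test relies on later in this report:

<==========>
# show nas statistics server-ip 192.168.200.242/32 vlan 200
<==========>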
5. Configure the Monitoring and Alerting Policy Suite (MAPS) network health monitoring feature.

- Enable MAPS on each node in the VCS fabric.

<==========>
sw# show running-config rbridge-id 55 maps
rbridge-id 55
 maps
  enable policy dflt_aggressive_policy
  enable actions EMAIL,SW_CRITICAL,SW_MARGINAL,RASLOG
  email [email protected]
  relay x.x.x.x domainname brocade.com
<==========>

- Configure the storage interfaces in the fabric as NAS or iSCSI, depending on the storage type. The storage type is configured on the physical interface, even if that interface is a member of a LAG.

<==========>
# sh run int te 111/0/30
interface TenGigabitEthernet 111/0/30
 deviceconnectivity NAS
 channel-group 14 mode active type standard
 fabric isl enable
 fabric trunk enable
 lacp timeout long
 no shutdown
<==========>
6. Configure AMPP for the VMware ESX, Windows Hyper-V, and Red Hat RHEV virtualized environments.

• For a VMware environment, the Virtual Machine MAC addresses and all of the Virtual Switch and Distributed Virtual Switch configuration can be auto-discovered by the Brocade VCS fabric by pointing the VCS fabric at the VMware vCenter. This discovery also auto-generates port-profiles based on the discovered Virtual Switch configurations.
- Enable CDP/LLDP on all the virtual switches (vSwitches) and distributed vSwitches (dvSwitches) in the vCenter inventory.
- Enter the configuration commands to discover the VMware vCenter Server from the VCS fabric:

<==========>
switch# conf t
switch(config)# vcenter ssr_cpil url https://10.38.53.191 username Administrator password ****
switch(config)# vcenter ssr_cpil discovery ignore-delete-all-response 5
switch(config)# vcenter ssr_cpil activate

switch# show vnetwork vcenter status
vCenter           Start                 Elapsed (sec)  Status
================  ====================  =============  ================
ssr_cpil          2015-08-07 14:59:26   9              Success
<==========>

- Verify the VCS fabric has discovered the necessary virtual asset information from the vCenter and has created the "auto" port-profiles based on the vCenter port groups, associating the host and VM MAC addresses with the port-profiles.
<==========>
# show running-config port-profile
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200
 vlan-profile
  switchport
  switchport mode trunk
  switchport trunk allowed vlan add 200
........
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 activate
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 static 0050.5664.68e3
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 static 0050.5669.8293
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 static 0050.566b.ee88
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 static 0050.566d.51fa
port-profile auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 static 0050.568e.4e1b
<==========>
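Beyond the generated port-profiles, the discovered inventory itself can be inspected with the show vnetwork family of commands (a sketch; subcommand availability varies by NOS release, so verify against the NOS Command Reference):

<==========>
# show vnetwork hosts
# show vnetwork vms
<==========>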
- Apply the port-profile to all VDX ports attached to VMware hosts. AMPP can be configured over a standard LAG or vLAG in the same manner as on a physical port.

<==========>
# sh run int po 40
interface Port-channel 40
 vlag ignore-split
 speed 40000
 port-profile-port domain default
 mtu 9216
 no shutdown
<==========>

- Verify the host and VM MAC addresses are present on the attached port and that the port-profile is active.

<==========>
# show port-profile status
Port-Profile                                       PPID  Activated  Associated MAC   Interface
auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200   11    Yes        0050.5664.68e3   Po 11
                                                                    0050.5669.8293   Po 41
                                                                    0050.566b.ee88   Po 40
                                                                    0050.566d.51fa   Po 19
                                                                    0050.568e.4e1b   Po 41

# show mac-address-table port-profile
Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C)
VlanId   Mac-address      Type      State    Port-Profile   Ports
200      0050.566b.ee88   Dynamic   Active   Profiled(T)    Po 40
200      0050.568e.4e1b   Dynamic   Active   Profiled(T)    Po 41
<==========>
• For the Windows Hyper-V and Red Hat RHEV environments, the port-profiles must be created and activated manually.

- Create and configure a new port-profile and its VLAN sub-profile. Activate the profile and associate it with the MAC address of each host and VM.

<==========>
Example configuration:
switch# configure terminal
switch(config)# port-profile manual_tintri_hyperv_host
switch(config-port-profile-manual_tintri_hyperv_host)# vlan-profile
switch(config-vlan-profile)# switchport
switch(config-vlan-profile)# switchport mode access
switch(config-vlan-profile)# switchport access vlan 200
switch(config)# port-profile manual_tintri_hyperv_host activate
switch(config)# port-profile manual_tintri_hyperv_host static 000e.1e50.f8c0
<==========>

- Associate the port-profile with the "default" port-profile domain.
<==========>
switch(config)# port-profile-domain default
switch(config-port-profile-domain-default)# port-profile manual_tintri_hyperv_host
<==========>

- Verify the port-profile configuration.

<==========>
# show running-config port-profile
port-profile manual_tintri_hyperv_host
 ppid 22
 vlan-profile
  switchport
  switchport mode access
  switchport access vlan 200
port-profile manual_tintri_hyperv_vm
 ppid 23
 vlan-profile
  switchport
  switchport mode access
  switchport access vlan 200
port-profile manual_tintri_rhev_host
 ppid 24
 vlan-profile
  switchport
  switchport mode access
  switchport access vlan 200
port-profile manual_tintri_rhev_vm
 ppid 25
 vlan-profile
  switchport
  switchport mode access
  switchport access vlan 200
………………….
port-profile manual_tintri_hyperv_host activate
port-profile manual_tintri_hyperv_host static 000e.1e50.f8c0
port-profile manual_tintri_hyperv_host static 8c7c.ff24.a800
port-profile manual_tintri_hyperv_vm activate
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6712
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6713
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6714
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6715
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6815
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6816
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6817
port-profile manual_tintri_hyperv_vm static 0015.5dc8.6818
port-profile manual_tintri_rhev_host activate
port-profile manual_tintri_rhev_host static 8c7c.ff22.f882
port-profile manual_tintri_rhev_host static 8c7c.ff22.f883
port-profile manual_tintri_rhev_host static 8c7c.ff4f.c702
port-profile manual_tintri_rhev_host static 8c7c.ff4f.c703
port-profile manual_tintri_rhev_vm activate
port-profile manual_tintri_rhev_vm static 001a.4aec.f88b
port-profile manual_tintri_rhev_vm static 001a.4aec.f88c
port-profile manual_tintri_rhev_vm static 001a.4aec.f88e
port-profile manual_tintri_rhev_vm static 001a.4aec.f891
port-profile manual_tintri_rhev_vm static 001a.4aec.f892
port-profile manual_tintri_rhev_vm static 001a.4aec.f893
port-profile manual_tintri_rhev_vm static 001a.4aec.f895
port-profile manual_tintri_rhev_vm static 001a.4aec.f897

# show running-config port-profile-domain default
port-profile-domain default
 ........
 port-profile manual_tintri_hyperv_host
 port-profile manual_tintri_hyperv_vm
 port-profile manual_tintri_rhev_host
 port-profile manual_tintri_rhev_vm
<==========>

- Apply the port-profile to all VDX ports attached to Hyper-V and RHEV hosts. AMPP can be configured over a standard LAG or vLAG in the same manner as on a physical port.

<==========>
# sh run int po 18
interface Port-channel 18
 vlag ignore-split
 port-profile-port domain default
 mtu 9216
 no shutdown
<==========>
- Verify the host and VM MAC addresses are present on the attached port and that the port-profile is active.

<==========>
# show port-profile status
Port-Profile                 PPID  Activated  Associated MAC   Interface
manual_tintri_hyperv_host    22    Yes        000e.1e50.f8c0   Po 10
                                              8c7c.ff24.a800   Po 18
manual_tintri_hyperv_vm      23    Yes        0015.5dc8.6712   Po 18
                                              0015.5dc8.6713   None
                                              0015.5dc8.6714   None
                                              0015.5dc8.6715   None
                                              0015.5dc8.6815   None
                                              0015.5dc8.6816   Po 10
                                              0015.5dc8.6817   Po 10
                                              0015.5dc8.6818   None
manual_tintri_rhev_host      24    Yes        8c7c.ff22.f882   Po 20
                                              8c7c.ff4f.c702   Po 21
manual_tintri_rhev_vm        25    Yes        001a.4aec.f88b   Po 21
                                              001a.4aec.f88c   Po 21
                                              001a.4aec.f88e   Po 20
                                              001a.4aec.f891   Po 20
                                              001a.4aec.f892   Po 21
                                              001a.4aec.f893   Po 21
                                              001a.4aec.f895   Po 20
                                              001a.4aec.f897   Po 20

# show mac-address-table port-profile
Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C)
VlanId   Mac-address      Type      State    Port-Profile   Ports
200      0050.5669.cae6   Dynamic   Active   Profiled(T)    Po 11
200      0050.566c.0ee3   Dynamic   Active   Profiled(T)    Po 23
200      0050.566d.4c66   Dynamic   Active   Profiled(T)    Po 19
200      8c7c.ff22.f882   Dynamic   Active   Profiled(U)    Po 20
200      8c7c.ff24.a800   Dynamic   Active   Profiled(U)    Po 18
200      8c7c.ff4f.c702   Dynamic   Active   Profiled(U)    Po 21
. . . . . . . . .
<==========>
Task 2. Configure Tintri VMstore T620 Flash Array

The VMstore T620 array has an active-standby controller architecture and supports VMware, RHEV, and Windows Hyper-V virtualized storage environments.

1. All usable capacity is part of a single pool that is presented as an NFS datastore to the VMware ESX and RHEV hypervisor hosts, and as an SMB share to the Windows Hyper-V hosts.

2. LACP is enabled for the data network to utilize the available bandwidth and provide redundancy. The data ports on the array are aggregated across multiple switches in the VCS fabric to form a vLAG. The data network is configured as shown below:
Figure 1 – Tintri VMstore LACP Settings

Figure 2 – Tintri VMstore Controller A Network Configuration

Figure 3 – Tintri VMstore Controller B Network Configuration

Figure 4 – Tintri VMstore Data Network Settings
3. Configure the corresponding VDX switch ports connected to the active and standby array controllers to form their respective port-channel groups. Repeat the steps below for the second port-channel with the standby controller ports.

<==========>
# sh run int te 111/0/32
interface TenGigabitEthernet 111/0/32
 deviceconnectivity NAS
 channel-group 15 mode active type standard
 fabric isl enable
 fabric trunk enable
 lacp timeout long
 no shutdown
!
# sh run int te 112/0/32
interface TenGigabitEthernet 112/0/32
 deviceconnectivity NAS
 channel-group 15 mode active type standard
 fabric isl enable
 fabric trunk enable
 lacp timeout long
 no shutdown
!
# sh run int po 15
interface Port-channel 15
 vlag ignore-split
 mtu 9216
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
!
# show port-channel 15
LACP Aggregator: Po 15 (vLAG)
 Aggregator type: Standard
 Ignore-split is enabled
 Member rbridges:
   rbridge-id: 111 (1)
   rbridge-id: 112 (1)
 Admin Key: 0015 - Oper Key 0015
 Partner System ID - 0xffff,90-e2-ba-6a-e3-c0
 Partner Oper Key 0033
 Member ports on rbridge-id 111:
   Link: Te 111/0/32 (0x6F0C040000) sync: 1 *
 Member ports on rbridge-id 112:
   Link: Te 112/0/32 (0x700C040000) sync: 1
<==========>
4. Add the VMware vCenter, RHEV Manager, and Hyper-V host information to the "Hypervisor managers" settings on the Tintri VMstore.

Figure 5 – Tintri VMware Hypervisor Manager Discovery

Figure 6 – Tintri Hyper-V Hypervisor Host Discovery

Figure 7 – Tintri RHEV Hypervisor Manager Discovery
5. No Protection or Replication policies are configured.
Task 3. Configure Virtualization Environment
VMware Environment Setup

The steps to configure and set up the ESX cluster and the host networking are covered here. The VMware environment consists of four ESX hosts in a cluster. Each host in the cluster runs four VMs: two VMs per host running the Medusa I/O tool and two VMs running the VMware IOAnalyzer tool.
1. The VM network on the host is configured using a Distributed Switch.
- Create a Distributed Switch and create an LACP LAG on the Distributed Switch (see the LACP status check after this list).
- Add hosts to the Distributed Switch and associate the physical ports with the LACP uplinks. The physical ports on the hosts are uplinked to separate switches in the VCS fabric.
- Configure the corresponding switch ports for the host uplinks to be in an LACP vLAG in the VCS fabric.
- Create a Distributed Port Group, set up VMkernel adapters for each host in the port group, and establish connectivity with the Tintri NFS target IP.
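A quick host-side check of the LAG state (available via esxcli on ESXi 5.5 and later when LACP is enabled on the Distributed Switch) confirms that both uplinks have bundled:

<==========>
# On each ESXi host: show LACP negotiation state of the dvSwitch uplinks
esxcli network vswitch dvs vmware lacp status get
<==========>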
Figure 7 – vSphere Distributed Switch LACP Settings

Figure 8 – Distributed Port Group Policy

Figure 9 – Uplink Port Group Policy

Figure 10 – vSphere Distributed Switch Topology
2. Mount the Tintri NFS datastore on all the ESX hosts in the cluster.
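The mount can also be scripted per host (a sketch; the volume name and export path shown are illustrative placeholders, not values recorded in this report):

<==========>
# On each ESXi host: mount the NFS export as a datastore, then confirm
esxcli storage nfs add --host=192.168.200.242 --share=/tintri --volume-name=Tintri-DS
esxcli storage nfs list
<==========>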
Figure 11 – Tintri NFS Datastore on VMware

Figure 12 – Tintri NFS Datastore and VMware Host Connectivity
RHEV Environment Setup

The steps to configure and set up the RHEV environment and the host networking are covered here. The RHEV environment consists of two hosts running RHEV Hypervisor v6.5 in a "Default" cluster, plus a RHEV Manager hosted on a RHEL v6.6 VM running RHEV Manager 3.4.5-0.3.el6ev. Each RHEV-H host runs four VMs running the Medusa I/O tool.

1. The RHEV Manager host needs to be subscribed to the appropriate RHEVM channels to obtain the necessary packages for setting up the RHEV Manager application. Refer to the "Red Hat Enterprise Virtualization 3.4 Installation Guide" for setup steps.

<==========>
# subscription-manager list
+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:    Red Hat Enterprise Virtualization
Product ID:      150
Version:         3.4
Arch:            x86_64
Status:          Subscribed
Status Details:
Starts:
Ends:
<==========>
2. Add the RHEV Hypervisor hosts to the RHEV Manager. Configure logical networks on the RHEV Manager and set up host networking as an LACP bond (a reference sketch of the bond follows this list).
- The physical ports on the hosts are uplinked to separate switches in the VCS fabric.
- Configure the corresponding switch ports for the host uplinks to be in an LACP vLAG in the VCS fabric.
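For reference, an 802.3ad (LACP) bond on a RHEL-based hypervisor is typically rendered as below; this is a sketch with hypothetical file contents, and RHEV Manager writes the equivalent when the bond is created in its UI:

<==========>
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes

# Verify LACP negotiation after the bond is up
cat /proc/net/bonding/bond0
<==========>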
Figure 13 – RHEV Logical Network Properties

Figure 14 – RHEV Host LACP Bond Setup

3. Create new storage domains connecting to the Tintri VMstore NFS share.

Figure 15 – RHEV Storage Domain Properties

Figure 16 – RHEV Storage Domains
Hyper-V Environment Setup

The steps to configure and set up the Hyper-V environment and the host networking are covered here. The Hyper-V environment consists of two Windows Server 2012 R2 Hyper-V hosts configured in a failover cluster and a management VM running Active Directory Domain Services. Each Hyper-V host runs four VMs running the Medusa I/O tool.
1. Configure the Hyper-V hosts and create a Failover Cluster:
- Install the Hyper-V role and the Failover Clustering feature on all the Hyper-V hosts.
- Configure an LACP team of two 10G data interfaces on each Hyper-V host (a PowerShell sketch follows Figure 17). The physical ports on the hosts are uplinked to separate switches in the VCS fabric. Configure the corresponding switch ports for the host uplinks to be in an LACP vLAG in the VCS fabric.
Figure 17 – Hyper-V Host NIC Teaming
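The same team can be created from PowerShell on Windows Server 2012 R2 (a sketch; the adapter names "NIC1" and "NIC2" are hypothetical placeholders for the two 10G data ports):

<==========>
PS C:\> New-NetLbfoTeam -Name "Data" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
<==========>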
- Create a "Data" and a "Mgmt" virtual switch on both Hyper-V nodes. The "Data" virtual switch is connected to the 10G teamed interface, and the "Mgmt" virtual switch is connected to the 1G out-of-band management interface.
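In PowerShell, the two virtual switches can be created as below (a sketch; the adapter names passed to -NetAdapterName are hypothetical):

<==========>
PS C:\> New-VMSwitch -Name "Data" -NetAdapterName "Data" -AllowManagementOS $true
PS C:\> New-VMSwitch -Name "Mgmt" -NetAdapterName "Mgmt-1G" -AllowManagementOS $true
<==========>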
Figure 18 – Hyper-V Virtual Switch Configuration
- Join both hosts to the Active Directory domain and create a Failover Cluster with the two nodes. Set up cluster networking to allow cluster and client network communication on the 10G "Data" network.
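Cluster creation can also be scripted (a sketch; the node names and cluster IP are hypothetical, while "tintri-hyperv-c" matches the cluster computer account used later in this report):

<==========>
PS C:\> New-Cluster -Name "tintri-hyperv-c" -Node "hv-node1","hv-node2" -StaticAddress 192.168.200.50
<==========>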
Figure 19 – Failover Cluster Creation

Figure 20 – Failover Cluster Network Configuration
2. Configure the Tintri VMstore for connectivity with the Hyper-V cluster as per the "Tintri – SCVMM & Hyper-V Setup Guide". The setup guide is available from the Tintri support portal.

- Create the required DNS entries and Active Directory groups for the Tintri VMstore and Hyper-V hosts.

<==========>
> nslookup tintri-data.ssrbrm.brocade.com
Server:   UnKnown
Address:  192.168.200.254

Name:     tintri-data.ssrbrm.brocade.com
Address:  192.168.200.242

> nslookup tintri-01.ssrbrm.brocade.com
Server:   UnKnown
Address:  192.168.200.254

Name:     tintri-01.ssrbrm.brocade.com
Address:  10.38.67.242
<==========>
Figure 21 – Tintri VMstore AD Configuration

Figure 22 – Tintri VMstore SMB Configuration

Figure 23 – Domain Group for All Tintri VMstore Computer Accounts
- Add the failover cluster computer account to the domain group containing all Hyper-V hosts; this is required to allow live VM migration between the nodes.
Figure 24 – Domain Group for Hyper-V Host Computer Accounts

Figure 25 – Configure Constrained Delegation for Hyper-V Host

Figure 26 – Hyper-V Host Administrator Access for Tintri VMstore
- Allow unencrypted WinRM/WMI on each Hyper-V host. This is done by setting AllowUnencrypted to "true" under winrm/config/service.

<==========>
PS C:\Users\add> winrm get winrm/config/service
Service
……..
    MaxConcurrentOperations = 4294967295
    MaxConcurrentOperationsPerUser = 1500
    EnumerationTimeoutms = 600000
    MaxConnections = 300
    MaxPacketRetrievalTimeSeconds = 120
    AllowUnencrypted = true
<==========>
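The setting itself can be applied with a standard winrm call from an elevated prompt on each host (shown here with the quoting PowerShell requires around the hashtable argument):

<==========>
PS C:\> winrm set winrm/config/service '@{AllowUnencrypted="true"}'
<==========>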
- Grant the recommended management access to the Tintri VMstore and Hyper-V host accounts.
Figure 27 – Management Access Configuration on Tintri VMstore
3. Install the Tintri Hyper-V Services and Tintri PowerShell Toolkit packages on the Hyper-V hosts to enable integration of Tintri storage with the Hyper-V environment.
4. Create an SMB share on the Tintri VMstore using the PowerShell cmdlets on the Hyper-V hosts, and grant access to the share for all the Hyper-V host and cluster computer accounts.

<==========>
PS C:\Windows\system32> Connect-TintriServer -server tintri-01.ssrbrm.brocade.com -UserName admin -Password tintri

HostNameOrIp       : tintri-01.ssrbrm.brocade.com
ApplianceHostName  : tintri-01.ssrbrm.brocade.com
Uuid               : 0105-1429-181-PLT-000000000000000000000000000000000000000
ApiVersion         : v310.21
ServerType         : VMstore
SessionId          : JSESSIONID=6EAFEB56166BC9ABEBC8EBC8F690C20D; Path=/; Secure; HttpOnly
ClientVersion      : 2.5.0.1-587
SessionStartZoned  : 2016-03-01T08:59:16 Mountain Standard Time (-07)
SessionStartLocal  : 3/1/2016 8:59:16 AM
LastUsed           : 3/1/2016 8:59:25 AM
SessionDuration    : 0:00:00:08 (d:hh:mm:ss)
ApiMethodCallCount : 3
IsDefaultServer    : True

PS C:\Windows\system32> New-TintriSmbShare -Name DataStore -Comment "For test VMs"

Name    : DataStore
Path    : /tintri/.tintri-smb/DataStore
Comment : For test VMs
Server  : tintri-01.ssrbrm.brocade.com

PS C:\Windows\system32> Grant-TintriSmbShareAccess -Name DataStore -User 'ssrbrm\ssr067127$' -Access FullControl

AceId             : S-1-5-21-296897355-2803942890-1855500782-1603_2032127
User              : SSRBRM\ssr067127$
SID               : S-1-5-21-296897355-2803942890-1855500782-1603
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

PS C:\Windows\system32> Grant-TintriSmbShareAccess -Name DataStore -User 'ssrbrm\ssr067129$' -Access FullControl

AceId             : S-1-5-21-296897355-2803942890-1855500782-1604_2032127
User              : SSRBRM\ssr067129$
SID               : S-1-5-21-296897355-2803942890-1855500782-1604
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

PS C:\Windows\system32> Grant-TintriSmbShareAccess -Name DataStore -User 'ssrbrm\tintri-hyperv-c$' -Access FullControl

AceId             : S-1-5-21-296897355-2803942890-1855500782-1609_2032127
User              : SSRBRM\tintri-hyperv-c$
SID               : S-1-5-21-296897355-2803942890-1855500782-1609
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

PS C:\Windows\system32> Get-TintriSmbShareAccess -name DataStore

AceId             : S-1-5-32-544_2032127
User              : BUILTIN\Super Admins
SID               : S-1-5-32-544
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

AceId             : S-1-5-21-296897355-2803942890-1855500782-1603_2032127
User              : SSRBRM\SSR067127$
SID               : S-1-5-21-296897355-2803942890-1855500782-1603
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

AceId             : S-1-5-21-296897355-2803942890-1855500782-1604_2032127
User              : SSRBRM\SSR067129$
SID               : S-1-5-21-296897355-2803942890-1855500782-1604
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com

AceId             : S-1-5-21-296897355-2803942890-1855500782-1609_2032127
User              : SSRBRM\Tintri-HyperV-C$
SID               : S-1-5-21-296897355-2803942890-1855500782-1609
AccessControlType : FULL_CONTROL
AccessMask        : 2032127
Share             : DataStore
Server            : tintri-01.ssrbrm.brocade.com
<==========>

5. Set the UNC mount path to the Tintri VMstore file share as the storage location for Virtual Machine and Virtual Hard Disk files in the Hyper-V Settings of each Hyper-V node.
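The same default paths can be set from PowerShell (a sketch; the node name is hypothetical, and the UNC form is inferred from the DNS entry and the DataStore share created above):

<==========>
PS C:\> Set-VMHost -ComputerName hv-node1 -VirtualMachinePath "\\tintri-data.ssrbrm.brocade.com\DataStore" -VirtualHardDiskPath "\\tintri-data.ssrbrm.brocade.com\DataStore"
<==========>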
Figure 28 – Hyper-V Storage Location Pointing to Tintri VMstore UNC Path
Tintri T620 Test Report
What's New in This Report

1. The array firmware version under test is 4.1.0.7.
2. The Brocade Network Operating System (NOS) version under test is v7.0.0.
3. The Emulex, QLogic, and Broadcom adapters have updated firmware and drivers.
4. The test bed adds 40Gb Ethernet NICs from Intel and Mellanox.

See the DUT Description tables below for detailed information. The previous version of this test report, covering Brocade NOS v6.0.1 and Tintri array firmware 3.2.1.4, is available here: Brocade VCS Fabric Technology with Tintri VMstore T620 Validation Test Report
Test Plan

The Tintri VMstore T620 array is deployed as a single NFS datastore and attached to VMware, Hyper-V, and RHEV virtualization environments. All hosts and storage connect via a Brocade VCS fabric in Logical Chassis mode.
Scope

Testing focuses on interoperability of the Tintri storage array and on determining an optimal configuration for performance and availability. Testing covers various I/O stress and error-handling scenarios. Performance is observed within the context of best-practice fabric configuration; absolute maximum benchmarking of storage performance is beyond the scope of this test. Details of the test steps are covered under the "Test Case Descriptions" section. Standard test equipment includes IBM/HP/Dell chassis server hosts with Brocade/QLogic/Emulex/Intel/Broadcom CNAs and NICs, with two uplinks from every host to the Brocade VCS fabric. I/O generator tools include Medusa and the VMware IOAnalyzer workload generator. Testing is performed with the Tintri VMstore T620 array as an NFS datastore attached to the VMware ESX and RHEV virtualization environments, and as an SMB file share attached to the Hyper-V environment.
Test Configuration

Figure 29 – Test Configuration
DUT Descriptions

The following lists the devices under test (DUT) and the test equipment used.

Storage Array

DUT ID        Model          Vendor    Description
Tintri T620   VMstore T620   Tintri    Dual-controller array dedicated for virtual environments. Supports NFSv3 for VMware and RHEV and SMB3 for Hyper-V. Controllers are in active/standby configuration. Each controller has 2x10GbE ports.

Switches

DUT ID               Model        Vendor    Description
VDX 8770 (sw-1)      VDX 8770-4   Brocade   Ethernet director chassis with 1 x 48-port 10GbE card and 1 x 27-port 40GbE card
VDX 6740 (sw-2..7)   VDX 6740     Brocade   64-port 10Gb switch (48x10Gb + 4x40Gb)
VDX 6940 (sw-8)      VDX 6940     Brocade   36-port 40Gb switch
DUT Specifications

Storage               Version
Tintri VMstore T620   4.1.0.7

Brocade switches   Version      Licensing
VDX 6740           NOS v7.0.0   10G Upgrade; 40G Upgrade
VDX 6940           NOS v7.0.0   40G Upgrade
VDX 8770-4         NOS v7.0.0   10G Upgrade; 40G Upgrade
LC48X10G           NOS v7.0.0   48-port 10GbE card
LC27X40G           NOS v7.0.0   27-port 40GbE card

Adapters                                       Version
Mellanox MT27500 ConnectX-3 2-port 40GbE NIC   driver 3.2.0.15
Intel XL710 2-port 40GbE NIC                   driver 1.4.26
QLogic QLE8442 2-port CNA                      driver 1.710.51
QLogic 1860 2-port 10GbE CNA                   driver 3.2.6.0
Emulex OCe14102-UM 2-port CNA                  driver 10.6.163.0
Intel X520-SR2                                 driver 3.9.58.9101
DUT ID     Servers                               RAM    Processor              OS
SRV-1      Dell PowerEdge R730                   32GB   Intel Xeon E5-2680v3   VMware ESXi 6.0.0u1a
SRV-2      HP ProLiant DL380p Gen8               32GB   Intel Xeon E5-2690v2   VMware ESXi 6.0.0u1a
SRV-3      HP ProLiant DL360 Gen7                24GB   Intel Xeon E5645       VMware ESXi 6.0.0u1a
SRV-4      HP ProLiant DL360 Gen7                24GB   Intel Xeon E5645       VMware ESXi 6.0.0u1a
SRV-5      IBM System x3630 M4                   8GB    Intel Xeon E5-2403     RHEV Hypervisor v6.5 (6.5 - 20150115.0.el6ev)
SRV-6      IBM System x3550 M4                   8GB    Intel Xeon E5-2620     RHEV Hypervisor v6.5 (6.5 - 20150115.0.el6ev)
SRV-7      Virtual Machine – RHEV Manager        4GB    Virtual CPU            RHEV Manager v3.4 (3.4.5-0.3.el6ev)
SRV-8, 9   HP ProLiant DL360 Gen7                24GB   Intel Xeon E5645       Windows Hyper-V 2012 R2
SRV-10     Virtual Machine – Domain Controller   4GB    Virtual CPU            Windows Server 2012 R2
Test Equipment

Device/Software Tools    Version
JDSU Analyzer/Jammer     Finisar Xgig 10GbE
VMware IOAnalyzer        1.6.2
Medusa Labs Test Tools   7.2.0.169914
Test Cases

These test cases are designed to verify basic and advanced functionality between the Brocade VCS fabric, the Tintri T620 storage array, and host devices; to stress all devices; and to confirm successful error recovery.

1.1 FABRIC INITIALIZATION – BASE FUNCTIONALITY
1.1.1 Storage Device – Physical and Logical Login with Speed Negotiation
1.1.2 NAS Connectivity
1.1.3 vLAG Configuration
1.2 ETHERNET STORAGE – ADVANCED FUNCTIONALITY
1.2.1 Storage Device – Jumbo Frame/MTU Size Validation
1.2.2 NAS Bandwidth Validation
1.2.3 Storage Device – w/Congested Fabric
1.2.4 Storage Device – NAS/CIFS Protocol Jammer Test Suite
1.2.5 VDX Buffer Settings Validation
1.2.6 Storage Device Interface Monitoring Using MAPS
1.2.7 AMPP Feature Validation – Automatic Migration of Port Profiles
1.3 STRESS & ERROR RECOVERY
1.3.1 Storage Device Fabric IO Integrity – Congested Fabric
1.3.2 Storage Device Integrity – Device Recovery from Port Toggle and Manual Cable Pull
1.3.3 Storage Device Integrity – Device Recovery from Device Relocation
1.3.4 Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run
1.3.5 Storage Device Recovery – ISL Port Toggle – Extended Run
1.3.6 Storage Device Recovery – ISL Port Toggle (Entire Switch)
1.3.7 Storage Device Recovery – VDX 8770 Director Blade Maintenance
1.3.8 Storage Device Recovery – Switch Offline
1.3.9 Storage Device Recovery – Switch Firmware Download
1.4 Optional/Additional Tests
1.4.1 Storage Device Firmware Update
1.4.2 Workload Simulation on Hyper-V and RHEV with Medusa
1.4.3 Workload Simulation on VMware with VMware IOAnalyzer and Medusa
1.1 Fabric Initialization – Base Functionality

1.1.1 Storage Device – Physical and Logical Login with Speed Negotiation

Test Objective
1. Verify device login to the VDX switch with all supported speed settings.
   a. Configure the VDX switch for Auto-NAS.
   b. Configure the storage port for NAS connectivity. Validate login and base connectivity.

Procedure
Test Execution:
1. Set up IP addresses on the host and the storage target ports.
2. Set up appropriate VLANs on the switch fabric to isolate NAS traffic, and associate the hosts and storage with this VLAN.
3. Enable Auto-NAS on the VCS fabric and set the NAS server IP.

<==========>
(config)# nas auto-qos
(config)# nas server-ip 192.168.8.242/32 vlan 8
<==========>

4. Change the switch port speed to Auto and 10G. (Setting the speed to 1G requires a supported SFP.)

Result Validation:
1. Validate link states on the array and IP connectivity between the array and hosts.
2. Check the switch port status and the "actual" and "configured" link speed.

<==========>
show interface tengigabitethernet X

# show interface te 111/0/30
TenGigabitEthernet 111/0/30 is up, line protocol is up (connected)   <--
……………………
LineSpeed Actual     : 10000 Mbit   <--
LineSpeed Configured : Auto, Duplex: Full   <--
<==========>

Result
1. PASS. IP connectivity and link speed negotiation verified.
1.1.2 NAS Connectivity

Test Objective
1. Verify host-to-file-share connectivity with CIFS and NFS, with multiple simultaneous connections.

Procedure
Test Execution:
1. Establish IP connectivity between the host and the array.
2. Create host groups and shares on the array with read/write access for the host IP.
3. Discover the shares on the hosts and start read/write I/O.

Result Validation:
1. Verify the host can connect to the share and has read/write access by checking the I/O stats on the host.
2. Verify Auto-NAS operation by checking the statistics for the configured NAS server IP.

<==========>
show nas statistics all | server-ip ip_addr/prefix [ vlan VLAN_id | vrf VRF_name ] [ rbridge-id rbridge-id ]

# show nas statistics server-ip 192.168.200.242/32 vlan 200
Rbridge 111
-----------
Server ip 192.168.200.242/32 vlan 200
matches 104894419 packets   <--
Rbridge 112
-----------
Server ip 192.168.200.242/32 vlan 200
matches 195243501 packets   <--
<==========>

Result
1. PASS. Able to perform read/write operations to the Tintri datastore. Verified Auto-NAS is working.
1.1.3 vLAG Configuration

Test Objective
1. Configure vLAG connectivity from the storage ports to two separate VDX switches.
2. Verify data integrity through the vLAG.

Procedure
Test Execution:
1. Create a vLAG between the VDX switches and the storage ports.

Result Validation:
1. Validate vLAG formation.

<==========>
show port-channel [ channel-group-number | detail | load-balance | summary ]

# show port-channel 3
LACP Aggregator: Po 3 (vLAG)
 Aggregator type: Standard
 Ignore-split is enabled
 Member rbridges:
   rbridge-id: 111 (1)
   rbridge-id: 112 (1)
 Admin Key: 0003 - Oper Key 0003
 Partner System ID - 0xffff,90-e2-ba-6a-e3-c0
 Partner Oper Key 0033
 Member ports on rbridge-id 111:
   Link: Te 111/0/32 (0x6F1810001F) sync: 1   <--
 Member ports on rbridge-id 112:
   Link: Te 112/0/32 (0x701810001F) sync: 1 *   <--
<==========>

2. Verify IP connectivity between the host and storage, and host access to the storage share.

Result
1. PASS. vLAG formed successfully. IP connectivity between host and storage verified.
1.2 ETHERNET STORAGE – ADVANCED FUNCTIONALITY

1.2.1 Storage Device – Jumbo Frame/MTU Size Validation

Test Objective
1. Perform I/O validation testing while incrementing the MTU size from minimum to maximum in reasonable increments.
2. Include jumbo frame sizes as well as the maximum negotiated/supported size between device and switch.
Procedure
Test Execution:
1. The MTU on the storage interfaces is set with the Enable/Disable Jumbo Frames switch. Enable = 9000; Disable = 1500.
2. Start large- and small-block I/O and verify all operations complete successfully.

Result Validation:
1. Check the host and storage logs for any errors.
2. Check the switch port statistics for the storage device and verify packet sizes.

<==========>
show interface tengigabitethernet X

# show interface te 111/0/32
TenGigabitEthernet 111/0/32 is up, line protocol is up (connected)
……….
MTU 9216 bytes   <--
………
Receive Statistics:
 80504790 packets, 61916324392 bytes
 Unicasts: 80491357, Multicasts: 13433, Broadcasts: 0
 64-byte pkts: 1341245, Over 64-byte pkts: 4427169, Over 127-byte pkts: 37202150
 Over 255-byte pkts: 1914873, Over 511-byte pkts: 587932, Over 1023-byte pkts: 345721
 Over 1518-byte pkts(Jumbo): 34685700   <--
<==========>

Result
1. PASS. Verified I/O completed without issues at all configured MTU sizes.
1.2.2 NAS Bandwidth Validation

Test Objective
1. Validate maximum sustained bandwidth to the storage port via NAS/CIFS.
2. After 15 minutes, verify I/O completes error free.

Procedure
Test Execution:
1. Start NFS I/O to the storage array from multiple connected hosts.

Result Validation:
1. Check the host and storage logs for any errors.
2. Check the switch logs and interface stats for any errors.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                 RX          TX
Packets          0           855
Bytes            0           202283
Unicasts         0           0
Multicasts       0           119
Broadcasts       0           736
--> Errors       0           0
--> Discards     0           0
--> Overruns     0
    Underruns                0
--> Runts        0
--> Jabbers      0
--> CRC          0
<==========>

3. Check the I/O initiator tool logs to verify I/O runs without errors.

Result
1. PASS. All I/O operations completed without errors. All validation checks passed.
1.2.3 Storage Device – w/Congested Fabric

Test Objective
1. Create a network bottleneck through a single fabric ISL.
2. Configure multiple iSCSI/NAS-to-host data streams sufficient to saturate the ISL's available bandwidth for 30 minutes.
3. Verify I/O completes error free.

Procedure
Test Execution:
1. Start NFS I/O to the storage array from each of the virtualization environments.
2. Disable redundant ISL links in the VCS fabric to isolate a single ISL.

Result Validation:
1. Check the host and storage logs for any errors.
2. Verify the link congestion and check the switch logs for any errors.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                 RX          TX
Packets          0           33712
Bytes            0           7779565
Unicasts         0           0
Multicasts       0           4967
Broadcasts       0           28745
--> Errors       0           0
--> Discards     0           0
--> Overruns     0
    Underruns                0
--> Runts        0
--> Jabbers      0
--> CRC          0
…………………………………………..
--> Mbits/Sec    0.000000    0.000257
--> Packet/Sec   0           0
--> Line-rate    0.00%       0.00%
<==========>

3. Check the I/O initiator tool logs to verify I/O runs without errors.

Result
PASS. I/O completed successfully on all hosts. All validation checks passed.
1.2.4 Storage Device – NAS/CIFS Protocol Jammer Test Suite

Test Objective
1. Perform protocol Jammer tests such as:
- CRC corruption
- packet corruption
- missing frame
- host error recovery
- target error recovery

Procedure
Test Execution:
1. Insert the Jammer device in the I/O path on the storage end.
2. Start a 50% mix of read/write I/O.
3. Execute the following Jammer scenarios:
- CRC corruption
- Drop packets to and from the target
- Drop read/write calls from host to target
- Replace IDLE with Pause Frame

Result Validation:
1. Check the host and storage logs for any errors.
2. Check the switch logs and interface stats for any errors.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                 RX          TX
Packets          0           855
Bytes            0           202283
Unicasts         0           0
Multicasts       0           119
Broadcasts       0           736
--> Errors       0           0
--> Discards     0           0
--> Overruns     0
    Underruns                0
--> Runts        0
--> Jabbers      0
--> CRC          0
<==========>

3. Verify Jammer operations and recovery with the Analyzer.

Result
1. PASS. I/O recovered in all instances after the Jammer operations.
1.2.5 VDX Buffer Settings Validation

Test Objective
1. Perform I/O validation testing with ingress and egress buffering enabled on the VCS fabric.
2. Verify performance impact with default, recommended, and maximum buffer limit settings.

Procedure
Test Execution:
1. Set buffer limits to default (Rx = 2048 KB; Tx = 1024 KB), recommended (2 MB), and maximum (8 MB) for the ingress and egress queues on all switches in the VCS fabric.

Default settings example; complete for each switch (rbridge-id) in the fabric:
<==========>
VDX6740_066_126# conf t
VDX6740_066_126(config)# rbridge-id 111
VDX6740_066_126(config-rbridge-id-111)# qos rcv-queue limit 2048
VDX6740_066_126(config-rbridge-id-111)# qos tx-queue limit 1024
<==========>
"Recommended" settings:
<==========>
VDX6740_066_126(config-rbridge-id-111)# qos rcv-queue limit 2048
VDX6740_066_126(config-rbridge-id-111)# qos tx-queue limit 2048
<==========>
Maximum settings:
<==========>
VDX6740_066_126(config-rbridge-id-111)# qos rcv-queue limit 8000
VDX6740_066_126(config-rbridge-id-111)# qos tx-queue limit 8000
<==========>

2. Start large- and small-block I/O from multiple initiators and verify all operations complete successfully with no packet discards.

Result Validation:
1. Check the host and storage logs for any errors.
2. Check the switch port statistics for the storage device and verify no packet drops.
3. Compare I/O latency times under the different buffer limit settings.

Result
PASS. I/O completes without errors with each buffer configuration. A range of traffic (read, write, and mixed, at varied block sizes) executes with similar latency across each buffer configuration.
1.2.6 Storage Device Interface Monitoring Using MAPS

Test Objective
1. Verify storage target ports are being monitored and any errors are reported accurately.

Procedure
Test Execution:
1. Enable MAPS monitoring on all switches in the VCS fabric.
2. Designate the storage target ports to the iSCSI/NAS group monitoring policy.
3. Start I/O from the hosts to the storage.
4. Monitor the switch logs and the MAPS dashboard for any warnings associated with the storage ports.

Result Validation:
1. Check the switch error logs and MAPS dashboard for warnings.

<==========>
# show logging raslog
[MAPS-1003], 25279, SW/0 | Active, WARNING, VDX6740_066_112, Eth Port 112/0/45, Condition=ALL_ETH_PORTS(RX_SYM_ERR/min>0), Current Value:[RX_SYM_ERR,352 Errors], RuleName=defALL_ETH_PORTS_RX_SYM_ERR_0, Dashboard Category=Port Health.

VDX6740_066_112# show maps dashboard rbridge-id 55
----------------------------------------------------------------------------------------
Dashboard for RbridgeId 55
----------------------------------------------------------------------------------------
1 Dashboard Information:
=======================
DB start time: Fri Aug 7 06:36:37 2015

2 Switch Health Report:
=======================
Current Switch Policy Status: HEALTHY

3.1 Summary Report:
===================
Category             |Today                  |Last 7 days        |
--------------------------------------------------------------------------------
Port Health          |Out of operating range |In operating range |
Fru Health           |In operating range     |In operating range |
Security Violations  |In operating range     |In operating range |
Switch Resource      |In operating range     |In operating range |

3.2 Rules Affecting Health:
===========================
Category(Rule Count)|RepeatCount|Rule Name                 |Execution Time   |Object           |Triggered Value(Units)|
________________________________________________________________________________________________________________________
Port Health(1)      |1          |defALL_ETH_PORTS_CRCALN_6 |08/07/15 14:01:20|Eth Port 55/0/48 |7 CRCs                |

3.3 History Data:
=================
Stats(Units)          Current         --/--/--  --/--/--  --/--/--  --/--/--  --/--/--  --/--/--
                      Port(val)       Port(val) Port(val) Port(val) Port(val) Port(val) Port(val)
----------------------------------------------------------------------------------------------------
CRCALN(CRCs)          55/0/48(7)      -         -         -         -         -         -
                      55/0/49(5)      -         -         -         -         -         -
                      55/0/47(3)      -         -         -         -         -         -
RX_ABN_FRAME(Errors)  55/0/49(1)      -         -         -         -         -         -
RX_SYM_ERR(Errors)    55/0/49(>9999)  -         -         -         -         -         -
                      55/0/46(>9999)  -         -         -         -         -         -
                      55/0/41(>9999)  -         -         -         -         -         -
                      55/0/47(4585)   -         -         -         -         -         -
                      55/0/34(1088)   -         -         -         -         -         -
RX_IFG(IFGs)          -               -         -         -         -         -         -
<==========>

Result
PASS. Storage ports are accurately monitored with MAPS. Notifications are delivered via the RAS log, the MAPS dashboard, and email.
1.2.7 AMPP Feature Validation – Automatic Migration of Port Profiles Test Objective Validate AMPP feature functionality and verify VM migration works seamlessly with host uplink ports configured using AMPP. Procedure Test Execution: 1. Configure AMPP on the VCS fabric for VMware, Hyper-‐V and RHEV environments as per “Step 6” under
the Brocade VCS Fabric Configuration section. 2. Start I/O from a VM in each environment to the storage and perform a live migration of the VM. 3. Verify VM migration completes successfully and the VM MAC address has re-‐associated with the
destination host interface on the same port-‐profile. Result Validation: 1. Verify VM migration completes successfully without any I/O interruption. 2. Check port-‐profile status to verify VM MAC has re-‐associated with the correct target physical port and
port-‐profile. <==========> # show port-profile status Port-Profile PPID Activated Associated MAC Interface auto_ssr_cpil_datacenter-1666_DPortGroup-VLAN200 9 Yes 0050.5661.bbc2 Po 24 0050.5669.cae6 Po 11 0050.566c.0ee3 Po 23 0050.566d.4c66 Po 19 manual_tintri_hyperv_vm 23 Yes 0015.5dc8.6712 Po 18 0015.5dc8.6816 Po 10 0015.5dc8.6817 Po 10 manual_tintri_rhev_vm 25 Yes 001a.4aec.f88b Po 21 001a.4aec.f88c Po 21 001a.4aec.f88e Po 20 <==========> Result
PASS. VM migration succeeded without any issues, and the VM MAC re-associated with the target host port-profile.
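For reference, a manual port-profile such as manual_tintri_hyperv_vm above could be created along the following lines. This is a minimal sketch: the VLAN ID is illustrative, and the static MAC association is repeated for each VM MAC shown in the status output.

<==========>
# configure terminal
(config)# port-profile manual_tintri_hyperv_vm
(conf-pp)# vlan-profile
(conf-pp-vlan)# switchport
(conf-pp-vlan)# switchport mode trunk
(conf-pp-vlan)# switchport trunk allowed vlan add 200
(config)# port-profile manual_tintri_hyperv_vm activate              <- activate before associating MACs
(config)# port-profile manual_tintri_hyperv_vm static 0015.5dc8.6712 <- one static entry per VM MAC
<==========>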
1.3 STRESS & ERROR RECOVERY
1.3.1 Storage Device Fabric I/O Integrity – Congested Fabric

Test Objective
1. From all available initiators, start a mixture of READ/WRITE/VERIFY traffic with random data patterns continuously to all their targets overnight.
2. Verify no host application failover or unexpected change in I/O throughput occurs.
3. Configure fabric & devices for maximum link & device saturation.
4. Include both iSCSI & NAS/CIFS traffic. (If needed, add L2 Ethernet traffic to fill the available bandwidth.)

Procedure
Test Execution:
1. Start NFS I/O to the storage array from each of the virtualization environments.
2. Set up a mix of READ/WRITE traffic.

Result Validation:
1. Check the host and storage logs for any errors.
2. Verify the link congestion and check the switch logs for any errors (a fabric-wide utilization check is sketched after the validation steps).
<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                RX         TX
Packets         0          33712
Bytes           0          7779565
Unicasts        0          0
Multicasts      0          4967
Broadcasts      0          28745
-> Errors       0          0
-> Discards     0          0
-> Overruns     0
   Underruns               0
-> Runts        0
-> Jabbers      0
-> CRC          0
...
-> Mbits/Sec    0.000000   0.000257
-> Packet/Sec   0          0
-> Line-rate    0.00%      0.00%
<==========>
3. Check I/O generator tool logs to verify I/O runs without errors.
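Beyond per-port statistics, link utilization can also be sampled fabric-wide in a single view. A minimal sketch, assuming the brief form of the stats command is available in this NOS release; the Line-rate column makes saturated links easy to spot:

<==========>
# show interface stats brief     <- one row per interface: packets, errors, and line-rate %
<==========>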
Result
1. PASS. All I/O completed without errors. All validation checks passed.
1.3.2 Storage Device Integrity – Device Recovery from Port Toggle and Manual Cable Pull

Test Objective
1. With I/O running, perform a quick port toggle on every Storage Device & Adapter port.
2. Verify host I/O will recover.
3. Performed sequentially for each Storage Device & Adapter port.

Procedure
Test Execution:
1. Set up multipath on the host and start I/O.
2. Perform multiple iterations of sequential port toggles across initiator and target switch ports (a sample toggle sequence is sketched after the validation steps).

Result Validation:
1. Check the switch port status after the toggle and for any errors in the switch error logs.
<==========>
# show logging raslog
# show interface tengigabitethernet X

# show interface te 111/0/30
TenGigabitEthernet 111/0/30 is up, line protocol is up (connected)   <-
...
LineSpeed Actual     : 10000 Mbit               <-
LineSpeed Configured : Auto, Duplex: Full       <-
<==========>
2. Check the port-channel status for the storage target port-channel after the port toggle to verify the port rejoins the port group.

<==========>
show port-channel [ channel-group-number | detail | load-balance | summary ]

# show port-channel 3
LACP Aggregator: Po 3 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
  rbridge-id: 111 (1)
  rbridge-id: 112 (1)
Admin Key: 0003 - Oper Key 0003
Partner System ID - 0xffff,90-e2-ba-6a-e3-c0
Partner Oper Key 0033
Member ports on rbridge-id 111:
  Link: Te 111/0/32 (0x6F1810001F) sync: 1     <-
Member ports on rbridge-id 112:
  Link: Te 112/0/32 (0x701810001F) sync: 1 *   <-
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
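The port toggle in procedure step 2 can be driven from the switch CLI. A minimal sketch for a single port (the interface number is illustrative; repeat for each initiator and target port):

<==========>
# configure terminal
(config)# interface TenGigabitEthernet 111/0/30
(conf-if-te-111/0/30)# shutdown       <- take the port down; host I/O should fail over
(conf-if-te-111/0/30)# no shutdown    <- restore the port; the path should recover
<==========>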
Result
1. PASS. I/O failed over and recovered successfully. All validation checks passed.
1.3.3 Storage Device Integrity – Device Recovery from Device Relocation

Test Objective
1. With I/O running, manually disconnect each port and reconnect it to a different switch in the same fabric.
2. Verify host I/O will fail over to an alternate path and the toggled path will recover.
3. Performed sequentially for each Storage Device & Adapter port.
4. Repeat the test for all switch types.

Procedure
Test Execution:
1. Set up multipath on the host and start I/O.
2. Move the storage target ports to different switch ports in the fabric.
3. Ensure the new switch ports in the fabric have the same configuration as the existing ports connecting the storage target (a sketch for capturing and re-applying the port configuration follows the validation steps).

Result Validation:
1. Check for any errors in the switch error logs and check the switch port status at the new switch port.
<==========>
# show logging raslog
# show interface tengigabitethernet X

# show interface te 111/0/32
TenGigabitEthernet 111/0/32 is up, line protocol is up (connected)   <-
...
LineSpeed Actual     : 10000 Mbit               <-
LineSpeed Configured : Auto, Duplex: Full       <-
<==========>
2. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
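When relocating a device port, the configuration of the original switch port can be captured and re-applied at the new port so the move is transparent to the host. A minimal sketch (port numbers are illustrative; the channel-group value should match the existing storage port-channel):

<==========>
# show running-config interface TenGigabitEthernet 111/0/30   <- capture the current port configuration
# configure terminal
(config)# interface TenGigabitEthernet 112/0/30               <- new port on a different switch
(conf-if-te-112/0/30)# channel-group 3 mode active type standard
(conf-if-te-112/0/30)# no shutdown
<==========>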
Result
1. PASS. I/O failed over and recovered successfully. All validation checks passed.
1.3.4 Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run

Test Objective
1. Sequentially toggle each initiator and target port in the fabric.
2. Verify host I/O will recover to an alternate path and the toggled path will recover.
3. Run for 24 hours.

Procedure
Test Execution:
1. Set up multipath on the host and start I/O.
2. Perform multiple iterations of sequential port toggles across initiator and target switch ports.
Result Validation:
1. Check the switch port status after the toggle and for any errors in the switch error logs.
<==========>
# show logging raslog
# show interface tengigabitethernet X

# show interface te 111/0/30
TenGigabitEthernet 111/0/30 is up, line protocol is up (connected)   <-
...
LineSpeed Actual     : 10000 Mbit               <-
LineSpeed Configured : Auto, Duplex: Full       <-
<==========>
2. Check the port-channel status for the storage target port-channel after the port toggle to verify the port rejoins the port group.

<==========>
show port-channel [ channel-group-number | detail | load-balance | summary ]

# show port-channel 3
LACP Aggregator: Po 3 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
  rbridge-id: 111 (1)
  rbridge-id: 112 (1)
Admin Key: 0003 - Oper Key 0003
Partner System ID - 0xffff,90-e2-ba-6a-e3-c0
Partner Oper Key 0033
Member ports on rbridge-id 111:
  Link: Te 111/0/32 (0x6F1810001F) sync: 1     <-
Member ports on rbridge-id 112:
  Link: Te 112/0/32 (0x701810001F) sync: 1 *   <-
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
Result
1. PASS. I/O failed over and recovered successfully. All validation checks passed.
1.3.5 Storage Device Recovery – ISL Port Toggle – Extended Run

Test Objective
1. Sequentially toggle each ISL path on all switches. Host I/O may pause, but should recover.
2. Verify fabric ISL path redundancy between hosts & storage devices.
3. Verify host I/O throughout the test.

Procedure
Test Execution:
1. Set up host multipath with links on different switches in the VCS fabric and start I/O.
2. Ensure ISL redundancy by provisioning multiple ISLs connected to different switches to provide multiple paths through the fabric.

<==========>
# show fabric isl rbridge-id X

# show fabric isl rbridge-id 111
Rbridge-id: 111    #ISLs: 3    <-
Src    Src           Nbr    Nbr
Index  Interface     Index  Interface     Nbr-WWN                  BW   Trunk  Nbr-Name
----------------------------------------------------------------------------------------------
110    Te 111/0/47   14     Te 126/1/15   10:00:00:05:33:14:47:80  10G  Yes    "VDX6740_066_126"
112    Fo 111/0/49   112    Fo 112/0/49   10:00:50:EB:1A:20:D3:81  40G  Yes    "VDX6740_066_112"
113    Fo 111/0/50   115    Fo 118/0/52   10:00:50:EB:1A:05:81:F0  40G  Yes    "VDX6740T_066_118"
<==========>
3. Perform multiple iterations of sequential ISL toggles across the fabric (a sample toggle sequence is sketched after the validation steps).

Result Validation:
1. Check the VCS fabric status after the ISL toggle. Verify all nodes are online.
<==========>
# show vcs detail
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 2
VCS GUID         : 47d42728-a8d7-4358-b7bc-6ff839e5a933
Total Number of Nodes           : 8     <-
Nodes Disconnected from Cluster : 0     <-
Cluster Condition               : Good  <-
Cluster Status   : All Nodes Present in the Cluster   <-
...
<==========>
2. Check the switch logs for any errors and verify I/O failed over to an alternate ISL path in the fabric.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet/FortyGigabitEthernet X

# show interface stats detail interface TenGigabitEthernet 111/0/47
Interface TenGigabitEthernet 111/0/47 statistics (ifindex 477145563188)
                RX         TX
-> Packets      0          855
-> Bytes        0          202283
...
-> Mbits/Sec    0.000000   0.001256
-> Packet/Sec   0          2
-> Line-rate    0.00%      0.00%
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
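An ISL can be toggled the same way as an edge port, using the interface names reported by show fabric isl. A minimal sketch (the interface number is illustrative):

<==========>
# configure terminal
(config)# interface FortyGigabitEthernet 111/0/49
(conf-if-fo-111/0/49)# shutdown       <- traffic should re-route over the remaining ISLs
(conf-if-fo-111/0/49)# no shutdown    <- the ISL should rejoin the fabric
<==========>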
Result
1. PASS. I/O re-routes to available paths in the VCS fabric and recovers when the link is restored. All validation checks passed.
1.3.6 Storage Device Recovery – ISL Port Toggle (Entire Switch)

Test Objective
1. Sequentially, and for all switches, disable all ISLs on the switch under test.
2. Verify fabric switch path redundancy between hosts & storage devices.
3. Verify the switch can merge back into the fabric.
4. Verify host I/O recovers after the switch merges back into the fabric.

Procedure
Test Execution:
1. Set up host multipath with links on different switches in the VCS fabric and start I/O.
2. Ensure ISL redundancy by provisioning multiple ISLs connected to different switches to provide multiple paths through the fabric.

<==========>
# show fabric isl rbridge-id X

# show fabric isl rbridge-id 111
Rbridge-id: 111    #ISLs: 3    <-
Src    Src           Nbr    Nbr
Index  Interface     Index  Interface     Nbr-WWN                  BW   Trunk  Nbr-Name
----------------------------------------------------------------------------------------------
110    Te 111/0/47   14     Te 126/1/15   10:00:00:05:33:14:47:80  10G  Yes    "VDX6740_066_126"
112    Fo 111/0/49   112    Fo 112/0/49   10:00:50:EB:1A:20:D3:81  40G  Yes    "VDX6740_066_112"
113    Fo 111/0/50   115    Fo 118/0/52   10:00:50:EB:1A:05:81:F0  40G  Yes    "VDX6740T_066_118"
<==========>
3. Perform multiple iterations of sequentially disabling all ISLs on a switch in the fabric.

Result Validation:
1. Check the VCS fabric status after the ISL toggle. Verify all nodes are online and have merged back into the fabric.

<==========>
# show vcs detail
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 2
VCS GUID         : 47d42728-a8d7-4358-b7bc-6ff839e5a933
Total Number of Nodes           : 8     <-
Nodes Disconnected from Cluster : 0     <-
Cluster Condition               : Good  <-
Cluster Status   : All Nodes Present in the Cluster   <-
...
<==========>
2. Check the switch logs for any errors and verify I/O resumes after the segmented switch node merges back into the fabric.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet/FortyGigabitEthernet X

# show interface stats detail interface TenGigabitEthernet 111/0/47
Interface TenGigabitEthernet 111/0/47 statistics (ifindex 477145563188)
                RX         TX
-> Packets      0          855
-> Bytes        0          202283
...
-> Mbits/Sec    0.000000   0.001256
-> Packet/Sec   0          2
-> Line-rate    0.00%      0.00%
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
Result
1. PASS. I/O recovered successfully once the switch merged back into the fabric. All validation checks passed.
2. When configuring the storage network with a LACP vLAG, sufficient ISLs should be configured in the VCS fabric between multiple switches to prevent a split-brain condition caused by a segmented switch.
1.3.7 Storage Device Recovery – 8770 Director Blade Maintenance

Test Objective
1. Validate path recovery and I/O integrity during Director blade maintenance.
Procedure
Test Execution:
1. Uplink edge switch ISLs to different blades on the directors.
2. Perform maintenance actions on the Director blades, including blade disable/enable, blade power on/off, and physical removal/insertion. Syntax example:

<==========>
# show linecard
Slot  Type      Description        ID   Status
----------------------------------------------------------------------
L1    LC48X10G  48-port 10GE card  114  ENABLED
L2    VACANT
L3    VACANT
L4    LC27X40G  27-port 40GE card  150  ENABLED

# slot L1 disable
Linecard 1 is disabled

VDX6740_066_126# show linecard
Slot  Type      Description        ID   Status
----------------------------------------------------------------------
L1    LC48X10G  48-port 10GE card  114  ENABLED (Interfaces Disabled)
L2    VACANT
L3    VACANT
L4    LC27X40G  27-port 40GE card  150  ENABLED

# slot L1 enable
Linecard 1 is enabled

# show linecard
Slot  Type      Description        ID   Status
----------------------------------------------------------------------
L1    LC48X10G  48-port 10GE card  114  ENABLED
L2    VACANT
L3    VACANT
L4    LC27X40G  27-port 40GE card  150  ENABLED

# power-off linecard 1
Linecard 1 is being powered-off

# show linecard
Slot  Type      Description        ID   Status
----------------------------------------------------------------------
L1    LC48X10G  48-port 10GE card  114  POWERED-OFF
L2    VACANT
L3    VACANT
L4    LC27X40G  27-port 40GE card  150  ENABLED

# power-on linecard 1
Linecard 1 is being powered-on

# show linecard
Slot  Type      Description        ID   Status
----------------------------------------------------------------------
L1    LC48X10G  48-port 10GE card  114  ENABLED
L2    VACANT
L3    VACANT
L4    LC27X40G  27-port 40GE card  150  ENABLED
<==========>

Result Validation:
1. Check the VCS fabric status after each blade maintenance action. Verify all nodes are online and are merged in the fabric.

<==========>
# show vcs detail
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 2
VCS GUID         : 47d42728-a8d7-4358-b7bc-6ff839e5a933
Total Number of Nodes           : 8     <-
Nodes Disconnected from Cluster : 0     <-
Cluster Condition               : Good  <-
Cluster Status   : All Nodes Present in the Cluster   <-
...
<==========>
2. Check the switch logs for any errors and verify I/O resumes after the blade maintenance task is completed.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet/FortyGigabitEthernet X

# show interface stats detail interface TenGigabitEthernet 111/0/47
Interface TenGigabitEthernet 111/0/47 statistics (ifindex 477145563188)
                RX         TX
-> Packets      0          855
-> Bytes        0          202283
...
-> Mbits/Sec    0.000000   0.001256
-> Packet/Sec   0          2
-> Line-rate    0.00%      0.00%
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
Result
1. PASS. I/O failed over to an alternate path and recovered once the blade maintenance task was completed. All validation checks passed.
1.3.8 Storage Device Recovery – Switch Offline

Test Objective
1. Toggle each switch in sequential order.
2. Include switch enable/disable, power on/off, and reboot testing.

Procedure
Test Execution:
1. Set up host multipath with links on different switches in the VCS fabric and start I/O.
2. Perform multiple iterations of sequential disable/enable, power on/off, and reboot of all the switches in the fabric (a sample reboot command is sketched after the validation steps).

Result Validation:
1. Check the VCS fabric status after the switch toggle. Verify all nodes are online and have merged back into the fabric.

<==========>
# show vcs detail
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 2
VCS GUID         : 47d42728-a8d7-4358-b7bc-6ff839e5a933
Total Number of Nodes           : 8     <-
Nodes Disconnected from Cluster : 0     <-
Cluster Condition               : Good  <-
Cluster Status   : All Nodes Present in the Cluster   <-
...
<==========>
2. Check the switch logs for any errors and verify I/O resumes after the segmented switch node merges back into the fabric.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet/FortyGigabitEthernet X

# show interface stats detail interface TenGigabitEthernet 111/0/47
Interface TenGigabitEthernet 111/0/47 statistics (ifindex 477145563188)
                RX         TX
-> Packets      0          855
-> Bytes        0          202283
...
-> Mbits/Sec    0.000000   0.001256
-> Packet/Sec   0          2
-> Line-rate    0.00%      0.00%
<==========>
3. Check host and storage error logs. Validate link states on the array and IP connectivity between the array and hosts.
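The reboot portion of the toggle can be driven with the reload command. A sketch; exact options vary by NOS release, and in logical chassis mode the node rejoins the cluster automatically after boot:

<==========>
# reload system     <- cold reboot of the local node; confirm with show vcs detail after boot
<==========>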
Result
1. PASS. I/O failed over to an alternate path and recovered once the switch merged back into the fabric. All validation checks passed.
1.3.9 Storage Device Recovery – Switch Firmware Download HCL (Where Applicable)

Test Objective
1. Sequentially perform the firmware maintenance procedure on all device-connected switches under test.
2. Verify host I/O will continue (with minimal disruption) through the firmware download and device pathing will remain consistent.

Procedure
Test Execution:
1. Set up host multipath with links on different switches in the VCS fabric and start I/O.
2. Sequentially perform firmware upgrades on all switches in the fabric (a sample download command is sketched after the validation steps).

Result Validation:
1. Verify the firmware upgrade completes successfully on each switch node and that the nodes merge back into the VCS fabric.

<==========>
# show version
Network Operating System Software
Network Operating System Version: 4.1.3
Copyright (c) 1995-2014 Brocade Communications Systems, Inc.
Firmware name: 4.1.3    <-
...
Slot  Name  Primary/Secondary Versions  Status
---------------------------------------------------------------------------
SW/0  NOS   4.1.3                       ACTIVE*
            4.1.3
SW/1  NOS   4.1.3                       STANDBY
            4.1.3

# show vcs detail
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 2
VCS GUID         : 47d42728-a8d7-4358-b7bc-6ff839e5a933
Total Number of Nodes           : 8     <-
Nodes Disconnected from Cluster : 0     <-
Cluster Condition               : Good  <-
Cluster Status   : All Nodes Present in the Cluster   <-
...
<==========>
2. Check the I/O generator tool logs to verify I/O runs without errors throughout the firmware upgrade.
3. Check the switch logs for any errors and verify I/O resumes on the node after the firmware upgrade is complete.

<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet/FortyGigabitEthernet X

# show interface stats detail interface TenGigabitEthernet 111/0/47
Interface TenGigabitEthernet 111/0/47 statistics (ifindex 477145563188)
                RX         TX
-> Packets      0          855
-> Bytes        0          202283
...
-> Mbits/Sec    0.000000   0.001256
-> Packet/Sec   0          2
-> Line-rate    0.00%      0.00%
<==========>
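The upgrade itself is driven with the firmware download command. A minimal sketch, assuming an SCP-reachable image repository; the host, user, and directory are illustrative:

<==========>
# firmware download scp host 10.10.10.5 user fwadmin password ****** directory /images/nos
<==========>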
Result
1. PASS. I/O operations completed without any errors. I/O failed over to an alternate path during the switch reload after the firmware upgrade and resumed after the switch came back online. All validation checks passed.
1.4 Optional/Additional Tests
1.4.1 Storage Device Firmware Update

Test Objective
1. Perform the firmware maintenance procedure on the storage device.
2. Verify host I/O will continue (with minimal disruption) through the firmware download and device pathing will remain consistent.

Procedure
Test Execution:
1. Set up host multipath with links on different switches in the VCS fabric and start I/O.
2. Perform a firmware update on each controller of the storage array.

Result Validation:
1. Check the I/O generator tool logs to verify I/O completes without any errors.
2. Check the host and storage logs for any errors throughout the I/O operations.
3. Check the switch error logs and port stats for any errors or I/O drops.
<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                RX         TX
Packets         0          855
Bytes           0          202283
Unicasts        0          0
Multicasts      0          119
Broadcasts      0          736
-> Errors       0          0
-> Discards     0          0
-> Overruns     0
   Underruns               0
-> Runts        0
-> Jabbers      0
-> CRC          0
<==========>
Result
1. PASS. I/O completed successfully during the update and failed over to the standby controller during the controller reboot.
1.4.2 Workload Simulation on Hyper-V and RHEV with Medusa

Test Objective
1. Validate storage/fabric behavior while running a workload simulation test suite.
2. Areas of focus may include random and sequential data patterns of various block sizes and database simulation.

Procedure
Test Execution:
1. Set up the Hyper-V and RHEV clusters as described in the host configuration settings.
2. Use the Medusa I/O tool for generating I/O and simulating workloads.
3. Run random and sequential I/O in a loop at block transfer sizes of 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.
4. Run the Medusa application I/O workload suite, which includes OLTP, Decision Support System (DSS), Exchange Email, File Server, Media Streaming, OS Drive, OS Paging, SQL, Video on Demand, VDI, and Web Server profiles.

Result Validation:
1. Check the I/O generator tool logs to verify I/O completes without any errors.
2. Check the host and storage logs for any errors throughout the I/O operations.
3. Check the switch error logs and port stats for any errors or I/O drops.
<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                RX         TX
Packets         0          855
Bytes           0          202283
Unicasts        0          0
Multicasts      0          119
Broadcasts      0          736
-> Errors       0          0
-> Discards     0          0
-> Overruns     0
   Underruns               0
-> Runts        0
-> Jabbers      0
-> CRC          0
<==========>
Result
1. PASS. All workload runs were monitored at the host, storage, and fabric, and verified to complete without any I/O errors or faults.
1.4.3 Workload Simulation on VMware with VMware IOAnalyzer and Medusa

Test Objective
1. Validate storage/fabric behavior while running a virtual workload simulation test suite.
2. Areas of focus include VM environments running de-duplication/compression data patterns and database simulation.

Procedure
Test Execution:
1. Set up an ESX cluster of 4 hosts with 4 VMs per host: 2 VMs running the VMware IOAnalyzer tool and 2 running the Medusa I/O generator tool.
2. Use the VMware IOAnalyzer tool for generating I/O and simulating workloads:
   - Run random and sequential I/O at large and small block transfer sizes.
   - Run a SQL Server simulation workload.
   - Run an OLTP simulation workload.
   - Run a Web Server simulation workload.
   - Run a Video on Demand simulation workload.
   - Run a Workstation simulation workload.
   - Run an Exchange server simulation workload.
3. Use the Medusa I/O tool to generate random and sequential I/O in a loop at block transfer sizes of 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.
4. Run the Medusa application I/O workload suite, which includes OLTP, Decision Support System (DSS), Exchange Email, File Server, Media Streaming, OS Drive, OS Paging, SQL, Video on Demand, VDI, and Web Server profiles.
Result Validation:
1. Check the I/O generator tool logs to verify I/O completes without any errors.
2. Check the host and storage logs for any errors throughout the I/O operations.
3. Check the switch error logs and port stats for any errors or I/O drops.
<==========>
# show logging raslog
# show interface stats detail interface tengigabitethernet X

# show interface stats detail interface TenGigabitEthernet 111/0/30
Interface TenGigabitEthernet 111/0/30 statistics (ifindex 477145006109)
                RX         TX
Packets         0          855
Bytes           0          202283
Unicasts        0          0
Multicasts      0          119
Broadcasts      0          736
-> Errors       0          0
-> Discards     0          0
-> Overruns     0
   Underruns               0
-> Runts        0
-> Jabbers      0
-> CRC          0
<==========>
Result
1. PASS. All workload runs were monitored at the host, storage, and fabric, and verified to complete without any I/O errors or faults.
Test Conclusions
1. Achieved a 100% pass rate on all the test cases in the SSR qualification test plan. The network and the storage were able to handle the various stress and error recovery scenarios without any issues.
2. Different I/O workload scenarios were simulated on the VMware, Hyper-V, and RHEV environments using the Medusa I/O and VMware IOAnalyzer I/O generator tools. Sustained performance levels were achieved across all workload types. The Tintri T620 array and the Brocade VCS fabric handled both the low-latency and the high-throughput I/O workloads with equal efficiency, without any I/O errors or packet drops.
3. The results confirm that the Tintri VMstore T620 array interoperates seamlessly with the Brocade VCS fabric and demonstrates high availability and sustained performance.
4. Setting up the array uplinks in a LACP vLAG achieved higher throughput compared to the active/slave settings, while providing fault tolerance and high availability (see the configuration sketch after this list).
5. Sufficient ISLs should be provisioned for each switch in the VCS fabric to prevent bottlenecks and to provide redundancy and high availability.
6. For optimal availability and performance, virtualization hosts should be set up with multiple uplinks in a "team" connecting to separate switches in the VCS fabric.
7. Brocade Network OS (NOS) version 5.0.1 introduces support for configuring enhanced ingress and egress queue depths. While our testing showed maximum throughput and low latency at the default settings, customers may wish to increase the buffer values from the defaults to the recommended value of 2 MB (settings up to 8 MB are supported but not recommended). The higher buffer setting provides an additional layer of network traffic resiliency, for example in a bursty TCP incast scenario.
8. Brocade Network OS (NOS) version 6.0.1 introduces support for the Monitoring and Alerting Policy Suite (MAPS), which enables each switch and storage port to be constantly monitored for potential faults and to generate alerts.
9. The Automatic Migration of Port Profiles (AMPP) feature and the vCenter integration reduce the configuration complexity of deploying an L2 network in a virtualized environment. The Brocade fabric automatically creates port profiles, binds them to the VM MAC addresses, and migrates the profiles as the VMs move between hosts. In Hyper-V and RHEV virtualized environments, the port profiles must be created manually and associated with the VM MAC addresses.
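For illustration, the switch side of the LACP vLAG noted in conclusion 4 can be configured along these lines. A minimal sketch: the port-channel and interface numbers are illustrative, and the same channel-group is applied to the member port on each participating switch so the port-channel forms a vLAG across rbridges.

<==========>
# configure terminal
(config)# interface Port-channel 3
(config-Port-channel-3)# vlag ignore-split       <- keep the vLAG up if the fabric segments
(config-Port-channel-3)# switchport
(config-Port-channel-3)# switchport mode trunk
(config-Port-channel-3)# no shutdown
(config)# interface TenGigabitEthernet 111/0/32
(conf-if-te-111/0/32)# channel-group 3 mode active type standard
(conf-if-te-111/0/32)# no shutdown
<==========>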