NetApp FlexPod Creation


FlexPod Hands-on Lab DC-3-493

Abstract

Gain hands-on experience with the unique differentiators of the new FlexPod architecture featuring NetApp Data ONTAP operating in Cluster-Mode. This includes rapid deployment, simplified scaling and administration, and an uncompromised end user experience through non-disruptive operations. Students will use both the individual component management tools and a comprehensive validated management tool to quickly deploy new blade servers and virtual desktops and to seamlessly rebalance the resulting load.

Key Takeaways
1. Experience simplified administration and management using NetApp OnCommand and Cisco UCS Manager
2. Learn about rapid deployment and simplified scaling using Cloupia Unified Infrastructure Controller
3. Provide an uncompromised end user experience through non-disruptive operations


1. Table of Contents

1. Table of Contents
2. Introduction
  2.1. Document Notation Key
  2.2. Lab Topology
3. Simplified Administration
  3.1. Create space-efficient clones using NetApp Virtual Storage Console
  3.2. Verify storage efficiency using OnCommand System Manager
  3.3. Check on the clones
  3.4. Create service profile template
4. Rapid Deployment, Simplified Scaling
  4.1. Expand compute cluster
    4.1.1. Create a workflow in CUIC
    4.1.2. Execute a workflow in CUIC
  4.2. Expand storage cluster
    4.2.1. Add a new storage controller to the cluster
    4.2.2. Rename the root aggregate
    4.2.3. Create a data aggregate
5. Non-Disruptive Operations
  5.1. Use vMotion to balance compute workload
  5.2. Use Data Motion to balance storage workload
  5.3. An Immortal Storage Infrastructure
    5.3.1. Investigate Data Path
    5.3.2. Create iSCSI LIF on cluster1-03
    5.3.3. Establish iSCSI session using SnapDrive
    5.3.4. Investigate Data Path
6. Conclusion
7. Appendix
  7.1. Modeled Lab Topology Diagram
  7.2. Extra Credit
  7.3. Turbo-Mode
  7.4. List of differences/modifications made for lab environment


2. Introduction

In this lab, you will take on the role of a FlexPod administrator and experience first-hand the unique value of FlexPod. First, you will experience Simplified Administration, using the individual element managers to quickly provision new VMs to support new employees. Next, you will experience Rapid Deployment and Simplified Scaling, using a validated FlexPod management solution to quickly provision a new ESXi host. You will also use the Data ONTAP command line interface to easily add a new storage controller to the storage cluster. Finally, you will experience Non-Disruptive Operations by rebalancing an active workload across the expanded infrastructure with zero downtime.

We have attempted to model a real FlexPod as closely as possible using simulators; however, there are some areas where we were forced to perform actions slightly differently from the way you would in real life. We have made notes in the lab guide at each of these points describing how the simulated environment is different and what you would normally do.

If you are only interested in specific portions of the lab, we have written a "Turbo Mode" for each section of the lab. Turbo mode will allow you to skip over a section of the lab as quickly as possible without compromising any of the subsequent lab exercises.

Finally, we have included several appendices with a diagram of the modeled lab topology, areas for further exploration, a list of all the differences resulting from the use of simulators, and a list of all the "turbo mode" sections if you are pressed for time.

2.1. Document Notation Key

Throughout this document, we use a variety of notation to separate different types of content. Please see the key below for help understanding the different content sections.

"Wow!" moments: These sections point out the "wow!" moments and emphasize the keytakeaways for the lab.

Scenario/Context/"Why?": These sections explain the context in which you are performing the lab exercise. They should help you to understand why the lab exercise is necessary.

Warning/Important: These sections call out important information regarding the current lab exercise. These notes may relate to either the specifics of the lab exercise, or they may emphasize concepts you should be aware of when working with a real FlexPod.


"In real life..."/Lab Differences/"Unreal(istic)...": These sections point out where the lab differsfrom real life. This could be a result of our use of simulators or specific changes made to hide

"uninteresting" bits of required process.

Turbo-Mode: These sections list all the steps necessary to quickly skip over a section of the lab with no additional commentary. Use these steps if you are only interested in a later portion of the lab.

FYI/Alternative methods: These sections list alternative methods, interesting tidbits, and areas for further exploration.

2.2. Lab Topology

Our simulated lab environment is noticeably different from a real-world FlexPod. Please see Appendix 7.1 for a diagram of the real-world environment we are attempting to model.


3. Simplified Administration

3.1. Create space-efficient clones using NetApp Virtual Storage Console

1. Open vSphere Client, and log in using Windows session credentials.

Your company has just hired several new employees, and you need to deploy new virtual desktops for them. You have a pre-built template, and all virtual desktops will be identical.

Create 2 clones of iometer-gold using VSC. Power them on and wait for them to finish booting. Boot is complete when Iometer is running. This may take up to 10 minutes.


2. Go to the VMs and Templates view by clicking on Home and then VMs and Templates (top of the screen, under Inventory).

3. Right-click on the iometer-gold template and click NetApp → Provisioning and Cloning → Create rapid clones.


4. Select Target Storage Controller: cluster1 and Vserver: vserver1.

If you get a "No storage controllers" error, restart the Virtual Storage Console service or justreboot the entire vCenter VM (which you are currently logged in to). Alternatively, just ask a

proctor to help you. This is a side-effect of cloning lab environments and only shows up on the firstboot after cloning.


5. Select Cluster1 to put the cloned VMs on ESXi hosts inside the cluster.

Choosing the cluster (instead of a single host) will result in VSC balancing the cloned VMs across all hosts in the cluster.


6. Select Same format as source to use the same disk format for the clones as the template. Since we are using NFS, this is normally Thin provisioned.


7. Create 2 clones with a base name of iometer and power them on automatically.


8. Select vm_datastore as the target for the cloned VMs.

If you select a different datastore than the current location of the template, you will have to wait for a slow copy of the template to the new datastore before rapid cloning can take place. For maximum efficiency, clones should always be on the same datastore as the template.


9. Review the summary and click Apply to begin cloning.

3.2. Verify storage efficiency using OnCommand System Manager

NetApp's cloning technology makes it fast and easy to deploy new virtual machines from a pre-built template. Even on resource-constrained simulators, the storage cloning for full-size VMs is finished in seconds; the slow part is waiting for them to boot.

Alternative Methods: (details in Appendix: Extra Credit)

Cloupia Unified Infrastructure Controller: Create a workflow with the Create VMs using VSC task


1. Click on the NetApp blue arch on the taskbar to launch OnCommand System Manager.

2. Select cluster1, and click Login.

3. Log in as user admin with password netapp1.

While the new VMs are booting, let's go look at OnCommand System Manager to see the effect of using Virtual Storage Console for cloning.

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.


4. Click Vservers on the lower left of the screen.


5. Navigate to cluster1 → vserver1 → Storage → Volumes, and click on vm_datastore. Click on the Storage Efficiency tab at the bottom of the screen.

6. You can see here that we are using 80% less space as a result of deduplication and space-efficient cloning!

Deduplication and space-efficient cloning allow you to run more virtual machines on less storage!
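FYI: the same savings are visible from the Data ONTAP command line (Appendix 7.2 lists this as an Extra Credit exercise). A quick sketch; the annotation is ours:

    df -h -s    (displays space saved per volume; look at the vm_datastore row)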


3.3. Check on the clones

Alternative Methods: (details in Appendix: Extra Credit)

Data ONTAP Command Line
vSphere Client and NetApp Virtual Storage Console

At this point, the cloned virtual machines will likely have finished booting. You can verify this by returning to the vSphere Client and opening each virtual machine's console.

Because we are using simulators, the cloned VMs may take up to 10 minutes to finish booting and launch Iometer. If they are still not working after 10 minutes, please contact a proctor for help. The simplest solution to this problem is to just power off the VMs, delete them from disk, and create fresh clones from the template.

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.


1. Right-click on a virtual machine and click Open Console.


2. Iometer should have automatically launched within the virtual machine. Click on the Results Display tab to view the performance results.

You should be seeing a few hundred IOPS transferring about 1 MB/sec. If Iometer failed to start, or you are seeing less than 100 IOPS, please ask a proctor for help; the load generated by Iometer will be important later in the lab.


3.4. Create service profile template

The VMs you cloned log in automatically and execute a script that performs a few "uninteresting" operations that are important for the remainder of the lab. First, the script establishes iSCSI sessions with cluster1-01 and cluster1-02 using SnapDrive for Windows. Next, it uses SnapDrive to create and mount a 100 MB LUN located on the iometer volume in vserver2. The iometer volume resides on aggr1_cluster1_02. Finally, it launches Iometer to generate IO traffic and detect any connectivity errors between the VM and its LUN. For details, please reference the diagram below or ask a proctor to clarify:
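To make the script's behavior concrete, here is a minimal sketch of its steps in Windows command-line form. This is an illustration, not the actual script: the establish_session syntax is borrowed from Section 5.3.3, the target IQN and the LIF addresses for cluster1-01 and cluster1-02 are placeholders, and the SnapDrive LUN-creation step is summarized as a comment rather than exact sdcli syntax.

    rem Establish iSCSI sessions with cluster1-01 and cluster1-02 (angle-bracket values are placeholders)
    sdcli iscsi_initiator establish_session -h 0 -hp 1 -t <vserver2-target-iqn> -np <cluster1-01-lif-ip> 3260
    sdcli iscsi_initiator establish_session -h 0 -hp 1 -t <vserver2-target-iqn> -np <cluster1-02-lif-ip> 3260
    rem Create and mount a 100 MB LUN on the iometer volume in vserver2 via SnapDrive
    rem Launch Iometer against the mounted LUN to generate IO traffic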


1. Open UCS Manager from the desktop and log in as user admin with password netapp1.

The service profile template would normally have been created to deploy the first ESXi host, but we will create it now to familiarize you with the UCS Manager interface. In addition, we will only be performing a subset of the tasks in the FlexPod deployment guide.

The purpose of the following activities is to familiarize you with UCS Manager, not to replicate the FlexPod deployment guide. Do not consider this a comprehensive list of the tasks required to build an ESXi Service Profile.

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

If you are unable to log in to UCS Manager (timeout/read error), open Firefox, and go to http://192.168.0.182/. Click Restart → Restart UCSPE. Select Yes, and click Restart UCS Emulator with Current Settings. Wait a few minutes for it to restart, and then try to log in again. This is unfortunately a known bug in the UCSM Simulator.


2. Click on the LAN tab and navigate to LAN → Policies → root → vNIC Templates. Click the green plus to add a new vNIC Template.


3. Enter the following information for the new vNIC Template:
Name: vNIC_A
Fabric ID: Fabric A
Target: Adapter
Template Type: Updating Template


4. Click Create VLAN to add a new VLAN, and enter the following information:
Name: NFS_VLAN
Fabric: Common/Global
VLAN ID: 3001


5. Click Create VLAN to add a new VLAN, and enter the following information:
Name: MGMT_VLAN
Fabric: Common/Global
VLAN ID: 3002


6. Finish creating the vNIC Template by entering the following:
Selected VLANs: default, NFS_VLAN, MGMT_VLAN
Native VLAN: default
MTU: 9000
MAC Pool: mac-pool-1


7. Repeat this process to create a second vNIC Template, vNIC_B.


8. Click on the Servers tab and navigate to Servers → Pools → root → UUID Suffix Pools. Click the green plus to add a new UUID Suffix Pool.


9. Name the pool uuid-pool-1, and click Next.

10. Click Add to add a UUID Suffix Block.


11. Enter a size of 32 for the block, and click OK.

12. Click Finish.


13. Navigate to Servers → Service Profile Templates, and click the green plus to add a new Service Profile Template.


14. Enter the following information for the new Service Profile Template:
Name: ESXi
Type: Updating Template
UUID Assignment: uuid-pool-1


15. Select the default local storage policy, and No vHBAs. Click Next.

We would like to specifically call out the differences here from real life: in FlexPod, we strongly recommend SAN booting your UCS servers to enable free mobility of service profiles. We have only removed it here for simplicity.


16. Select Expert mode, and click Add to add a vNIC to the Service Profile Template.


17. Enter the following information:
Name: vNIC_A
Use LAN Connectivity Template: Yes
vNIC Template: vNIC_A
Adapter Policy: VMware


18. Repeat the process to add vNIC_B with the following information:
Name: vNIC_B
Use LAN Connectivity Template: Yes
vNIC Template: vNIC_B
Adapter Policy: VMware


19. Let the system perform placement of adapters. Click Next.


20. Select the default boot policy, and click Next.


21. We aren't using a maintenance policy. Click Next.


22. Select server pool blade-pool-2, and click Next.


23. We aren't using any operational policies. Click Finish.

Alternative Methods: (details in Appendix: Extra Credit)

Cloupia Unified Infrastructure Controller


4. Rapid Deployment, Simplified Scaling

4.1. Expand compute cluster

4.1.1. Create a workflow in CUIC

Congratulations! You have successfully filled your ESXi host with business-critical virtual machines. However, your company is still growing, and you need more compute capacity!

The following section will walk you through deploying an additional ESXi host using Cloupia Unified Infrastructure Controller (CUIC). While this process could be performed manually, a FlexPod management solution provides unified, turn-key automation and orchestration capabilities for the entire FlexPod infrastructure.

In vCenter, add the new ESXi host, 192.168.0.52, with credentials root:netapp1. Right-click on the new host and navigate to NetApp → Provisioning and Cloning → Mount datastores. Use the wizard to mount vm_datastore on 192.168.0.52.

In this lab, we don't want to wait for ESXi to be installed on a simulator (it takes a long time), so we have pre-installed ESXi and will be adding the pre-provisioned host to the cluster. CUIC is fully capable of performing the end-to-end orchestration required to deploy a new ESXi host.


1. Open CUIC from the Start Menu.


2. Log in to CUIC as user admin with password netapp1.

3. Navigate to Policies → Orchestration, and click Add Workflow.


4. Name your workflow Expand Compute Cluster, and place it in a new folder named FlexPod. Click Next.


5. Click to add a new input field:
Input Label: ESXi Hostname
Input Type: Generic Text Input


6. Select the newly created workflow, and open the Workflow Designer.

7. Use the search box in the top left to search for the Register Host with vCenter task. Drag the task into the workflow designer.


8. Click Next.


9. Map the Host Node attribute to the ESXi Hostname user input, and click Next.


10. Enter the credentials for the new ESXi host:
User ID: root
Password: netapp1

11. Use the search box to find the Associate Cluster Volume as NFS Datastore task. Drag the task onto the workflow designer.


12. Click Next.


13. Map the Hostnode attribute to the HOST_NAME output of the RegisterHostwithvCenter task. Click Next.


14. Enter the mount information for the datastore:
Filer Identity Name: vserver1_lif1
NFS Path: /vm_datastore
Datastore Name: vm_datastore

15. To integrate your new task into the workflow, drag the on success exit point of the RegisterHostwithvCenter task onto the AssociateClusterVolumeasNFSDatastore task.


16. Connect the on failure exit point to Completed (Failed) to finish designing the workflow. Click Validate Workflow to check your work. Click Close when you are done.

4.1.2. Execute a workflow in CUIC

Although the example workflow is rather short, a full ESXi deployment workflow would create a LUN on the storage controller, configure zoning on the switch, create a Service Profile on UCS, associate it with a server, PXE boot the server, install ESXi, add the new host to vCenter, and mount the appropriate datastores. Despite the vast increase in complexity, the workflow is still extremely easy to execute. The full workflow is in the System folder and named Deploy ESXi Host as shown below:

CUIC allows you to automate all your business processes with extremely powerful workflows that can control your entire infrastructure.


1. Right-click on the new workflow, and click Execute now.

2. Enter the IP address of the new ESXi host: 192.168.0.52.


3. Navigate to Organizations → Service Requests to view the status of the service request.

4. Click Refresh periodically until you see that the service request has completed. This should take no more than 5 minutes.

4.2. Expand storage cluster

A FlexPod management solution like CUIC makes it extremely easy to deploy additional ESXi hosts as your company grows!

Alternative Methods: (details in Appendix: Extra Credit)

vSphere Client and NetApp Virtual Storage Console


4.2.1. Add a new storage controller to the cluster

1. Open PuTTY from the taskbar and connect to cluster1-03.

2. Log in as user admin with password netapp1.

Your company has continued to grow, and you have cloned many VMs. While NetApp's storage-efficiency technology has greatly reduced the storage required for these VMs, you have finally reached the point of needing to expand your storage infrastructure. Fortunately, Data ONTAP operating in Cluster-Mode makes this extremely fast and easy.

SSH into cluster1-03 and cluster1-04 using PuTTY, type join, and press Enter until cluster setup is complete.

You would normally connect a serial cable to the storage controller's console port. However, because we are using simulators, we have pre-configured the storage controller's networking to allow access via SSH.


3. Enter join to join this node to an existing cluster.

4. Enter yes (or just press Enter) to use default settings for cluster ports.

For a real-world storage cluster, you would be using jumbo frames (MTU 9000) for the cluster ports. When using physical storage controllers, Data ONTAP uses 9000 by default.


5. Enter cluster1 (or just press Enter) to join this node to cluster1.


6. Press Enter through the remainder of the prompts. The node management port was preconfigured, so these settings default to the current values.

Because we pre-configured the network, the node management interface configuration was pre-populated during setup. Normally, you would need to enter the node management interface settings.


7. PuTTY will disconnect automatically when cluster setup is complete.


8. Repeat the above process to add cluster1-04 to the storage cluster.

4.2.2. Rename the root aggregate

In this lab, we are adding two standalone nodes to the storage cluster. In a real-world environment, these nodes would be in a Storage Failover (SFO) pair for High Availability.

Wait, you're done already? You couldn't have already expanded your storage infrastructure! Aren't upgrades supposed to require scheduled downtime and working all weekend?

Because you like things neat and organized in your data center, you have instituted a naming convention for all the aggregates in your storage infrastructure. However, Data ONTAP can't read your mind, so you need to rename the root aggregate to match your convention.

Data ONTAP operating in Cluster-Mode has a very powerful command line interface. However, as you can see in this section's turbo mode, the commands can get somewhat lengthy. Fortunately, this is not a problem due to extensive tab completion capabilities. You will leverage the command line and tab completion to rename the root aggregate.

SSH into cluster1 and execute: storage aggregate rename -aggregate aggr0 -newname aggr0_cluster1_03


1. Open PuTTY from the taskbar and connect to cluster1.

2. Log in as user admin with password netapp1.


3. Determine the current name of the root aggregate: storage aggregate show -nodes cluster1-03 -root true


4. Type the following (broken onto multiple lines for readability):
sto<tab> to produce storage
ag<tab> to produce aggregate
r<tab>n<tab> to produce rename
<tab>0 to produce -aggregate aggr0 (the current name of the root aggregate)
<space><tab>0_cluster1_03 to produce -newname aggr0_cluster1_03 (the new name of the root aggregate)
<enter> to execute the command


5. Repeat the above process to rename the root aggregate on cluster1-04.
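For reference, the fully typed-out commands for the second node would look like the following. The new name aggr0_cluster1_04 is our assumption based on the lab's naming convention; run the show command first, and substitute the reported aggregate name if it differs from aggr0:

    storage aggregate show -nodes cluster1-04 -root true    (confirm the current root aggregate name)
    storage aggregate rename -aggregate aggr0 -newname aggr0_cluster1_04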

Tab completion is awesome! In the words of NetApp co-founder Dave Hitz, "I put command-line on our feature list!" The Data ONTAP command line provides you with an easy-to-use single point of management for your entire storage infrastructure.

Alternative Methods: (details in Appendix: Extra Credit)

OnCommand System Manager
Cloupia Unified Infrastructure Controller


4.2.3. Create a data aggregate

1. Click on the NetApp blue arch on the taskbar to launch OnCommand System Manager.

2. Select cluster1, and click Login.

3. Log in as user admin with password netapp1.

Your new storage controller is now an active member of the storage cluster, but you haven't yet told Data ONTAP how you want to use all the disks attached to it. As with aggregate naming, Data ONTAP can't read your mind, so you need to tell it how the disks should be utilized by creating data aggregates according to your needs.

SSH into cluster1 and execute: storage aggregate create -nodes cluster1-03 -aggregate aggr1_cluster1_03 -diskcount 42


4. Navigate to cluster1 → Storage → Aggregates, and click on Create to create a new aggregate.

5. Click Next to begin the Create Aggregate Wizard.


6. Enter aggr1_cluster1_03 for the new aggregate's name. Select RAID-DP as the RAID type, and 64-bit as the block format. Click Next.

In general, you should always use RAID-DP and 64-bit for new aggregates.


7. Select the new controller, cluster1-03, and ATA disks. Click Next.


8. Click Select Disks to input the number of disks to add to the new aggregate.


9. Enter 42 disks. Click Ok.


10. Verify that you have balanced RAID groups, and click Next.

RAID group size is extremely important! For performance, you should optimize the RAID group size so that all RAID groups contain the same number of drives. For detailed guidance, please reference the Tech OnTap article Back to Basics: RAID-DP.


11. Review the summary, and click Create to create the new aggregate.


12. Wait for the aggregate to be created, and then click Finish. This will take about a minute.

We are only utilizing one of the new storage controllers in this lab. In real life, you would create data aggregates on both controllers.
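For reference, creating the matching data aggregate on cluster1-04 from the command line mirrors the Extra Credit exercise in Appendix 7.2 and assumes that node also has 42 spare disks available:

    storage aggregate create -nodes cluster1-04 -aggregate aggr1_cluster1_04 -diskcount 42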

Alternative Methods: (details in Appendix: Extra Credit)

Data ONTAP Command Line
Cloupia Unified Infrastructure Controller


5. Non-Disruptive Operations

5.1. Use vMotion to balance compute workload

1. Return to vSphere Client.

2. Right-click on iometer2, and click Migrate.

Awesome! You have now successfully added storage and compute capacity to your FlexPod. Now, you need to balance your production applications across the new components, but you can't afford any downtime to perform a migration. You need to non-disruptively migrate your production workloads across the infrastructure.

Data ONTAP operating in Cluster-Mode and vSphere both support non-disruptive operations, so you can seamlessly migrate production application workloads across the infrastructure. While we are demonstrating load rebalancing in this lab, you could just as easily be migrating off of old hardware scheduled for retirement.

Use vMotion to migrate iometer2 to the new ESXi host, 192.168.0.52.


3. Select Change host to move the VM to a different host. Click Next.


4. Select 192.168.0.52 as the destination for the vMotion. Click Next.


5. Select High priority, and click Next.


6. Review the summary, and click Finish.

7. During the migration, you can return to the VM console and see that the guest continues to operate throughout the entire process.

vMotion enables completely non-disruptive migration of production workloads between different physical servers. The process is entirely transparent to the end user.

Alternative Methods: (details in Appendix: Extra Credit)

Cloupia Unified Infrastructure Controller


5.2. Use Data Motion to balance storage workload

1. Open PuTTY from the taskbar and connect to cluster1.

SSH into cluster1 and execute: volume move start -vserver vserver2 -volume iometer -destination-aggregate aggr1_cluster1_03 -foreground true


2. Look to see where the volume currently resides by executing: volume show iometer


3. Use Data Motion to non-disruptively move the volume onto the new storage controller. Don't forget to use tab completion!

Base Command: volume move start
Arguments:
-vserver vserver2 (containing Vserver)
-volume iometer (volume to move)
-destination-aggregate aggr1_cluster1_03 (data aggregate on new controller)
-foreground true (we want to watch the migration process happen)

Full Command: volume move start -vserver vserver2 -volume iometer -destination-aggregate aggr1_cluster1_03 -foreground true
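If you would rather not watch the move in the foreground, you can start it in the background and poll its progress instead. A minimal sketch, assuming the standard Cluster-Mode volume move commands:

    volume move start -vserver vserver2 -volume iometer -destination-aggregate aggr1_cluster1_03 -foreground false
    volume move show -vserver vserver2 -volume iometer    (poll until the move reports completion)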


4. Look to see where the volume now resides by executing: volume show iometer


5. During the Data Motion operation, you can return to the VM console and see that Iometer has continuous access to the LUN, and there are no IO errors detected.

5.3. An Immortal Storage Infrastructure

Data Motion enables completely non-disruptive migration of production data between different physical storage controllers. As shown by Iometer's lack of errors, this process is entirely transparent to the application or user accessing the data.

Alternative Methods: (details in Appendix: Extra Credit)

OnCommand System Manager
Cloupia Unified Infrastructure Controller


5.3.1. Investigate Data Path

Excellent! You have successfully moved your application data onto a new storage controller without disruption or IO errors. Let's take a quick look at our new infrastructure topology:

"But wait," you ask, "how is the application accessing the data? The new controller doesn't have any datainterfaces..." You are absolutely correct! The data lives on cluster1-03, but Iometer is accessing it usingthe iSCSI interfaces on cluster1-01 and cluster1-02. DataONTAP then transparently accesses the dataover the cluster interconnect on behalf of the client. The current data path is represented by the pink linefrom the iometer2 VM to the iometer volume.

With Data ONTAP operating in Cluster-Mode, an IO request can be sent to any Vserver LIF in the cluster, and it will be handled correctly. Data ONTAP's ability to transparently redirect IO requests across the cluster interconnect is a key tenet of non-disruptive operations. Regardless of why there is no optimized path, whether due to hardware failure or misconfiguration, the client will not see any interruption in data availability.

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.


1. Use PuTTY to log in to cluster1 and execute: statistics show-periodic -interval 1 -iterations 10

5.3.2. Create iSCSI LIF on cluster1-03

Uh oh! It looks like all of the data is being accessed over the cluster interconnect instead of directly. While this works (that is the point of Cluster-Mode), it is less efficient. In this situation, the solution is to add an iSCSI LIF on the storage controller hosting the volume and inform the Iometer VMs of the new iSCSI path.

SSH into cluster1 and execute: network interface create -vserver vserver2 -lif vserver2_lif3 -role data -data-protocol iscsi -home-node cluster1-03 -home-port e0d -address 192.168.0.126 -netmask 255.255.255.0


1. List the current LIFs in vserver2: network interface show -vserver vserver2

2. Create a new iSCSI LIF on cluster1-03: network interface create -vserver vserver2 -lif vserver2_lif3 -role data -data-protocol iscsi -home-node cluster1-03 -home-port e0d -address 192.168.0.126 -netmask 255.255.255.0

Remember to use tab completion!


3. List the current LIFs in vserver2: network interface show -vserver vserver2

5.3.3. Establish iSCSI session using SnapDrive

1. Return to the VM console for iometer1.

Alternative Methods: (details in Appendix: Extra Credit)

OnCommand System Manager
Cloupia Unified Infrastructure Controller

Open Command Prompt inside each Iometer VM and execute: sdcli iscsi_initiator establish_session -h 0 -hp 1 -t iqn.1992-08.com.netapp:sn.27e8b236fb7611e18186123478563412:vs.4 -np 192.168.0.126 3260


2. Open SnapDrive from the Start Menu.


3. Navigate to SnapDrive → (system name) → iSCSI Management, and click Establish Session.


4. Click Next.

5. Enter the management IP address for vserver2: 192.168.0.123.


6. The wizard should have automatically selected the local IP address and the new LIF address. Click Next.

7. Click Finish.


8. Open LUN MPIO Status from the Start Menu.


9. Click on the MPIO tab to see the current multipathing status for the LUN.

10. Repeat this process for the other VM, iometer2.

5.3.4. Investigate Data Path

MPIO automatically switches to the optimized path! If we were to move the volume back (try it!), MPIO would detect the change and redirect traffic appropriately.

Try moving the volume back to cluster1-02 and see that MPIO automatically switches to the optimal data path.

Alternative Methods: (details in Appendix: Extra Credit)

Cloupia Unified Infrastructure Controller (PowerShell Agent)

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.


1. Use PuTTY to log in to cluster1 and execute: statistics show-periodic -interval 1 -iterations 10

If you are still seeing a lot of traffic over the cluster interconnect, double-check that you have an active/optimized path on both of the Iometer VMs.


Excellent! You have resolved the unoptimized path issue. Let's look at the new infrastructure topology:

Now, all the IO requests are going straight to the controller hosting the data. Now that all the paths are set up, we can migrate the volume between controllers as much as we want, and MPIO will automatically update the optimized path after every Data Motion operation.


6. Conclusion

In this hands-on lab, you took on the role of a FlexPod administrator and experienced first-hand the unique value of vSphere built on FlexPod with Data ONTAP operating in Cluster-Mode. You saw how FlexPod, a pre-validated, standardized data center solution, can accelerate your transition to the cloud by increasing your flexibility to grow and adapt to changing business requirements while simultaneously reducing the risk of disruption or downtime.

First, you experienced Simplified Administration by using NetApp Virtual Storage Console to rapidly provision new VMs to support new employees. NetApp storage allowed you to provision a new VM in seconds while simultaneously saving space through deduplication. You then used Cisco UCS Manager to manage your entire physical server infrastructure through a single unified interface. Through shared identity pools and service profile templates, you can rapidly provision new servers for growth or reprovision identical servers for disaster recovery.

Well done! Let's briefly review what you've accomplished:

Deploy new VMs from a pre-built template
Expand vSphere cluster using physical servers with no existing configuration
Expand NetApp storage cluster using brand-new storage controllers
Move production workloads onto new equipment

Wait, you accomplished all of that in 2 hours? And end users didn't experience any disruption in service? That is truly amazing!

In a traditional environment, this entire process could take months:

Days to weeks to deploy new VMs without cloning (depending on complexity and quantity)
Hours to days to deploy new physical servers without UCS and CUIC:
  Cable
  Configure server
  Configure network
  Configure storage
  Install ESXi
  Configure ESXi
  Migrate workloads
Days to weeks to upgrade storage system without NetApp storage clustering:
  Configure new system to match old
  Downtime!
  Move all data onto the new storage system
  Modify all servers to use the new storage system's WWPNs



Next, you experienced Rapid Deployment and Simplified Scaling by scaling your FlexPod to meet increasing demand as your company grew. You first used a single workflow in Cloupia Unified Infrastructure Controller (CUIC) to completely provision a new ESXi host, including creating a UCS service profile, creating and exporting a boot LUN, configuring Fibre Channel zoning, installing ESXi, and adding the new host to vCenter for immediate use. CUIC provides a single pane of glass through which you can manage the entire FlexPod infrastructure and take advantage of each component's unique features. You then leveraged the near-infinite scalability of Data ONTAP operating in Cluster-Mode to easily expand your storage infrastructure. You were able to present and manage a single, always-on storage domain as you added additional storage controllers to the cluster with almost no new configuration and absolutely no downtime.

Finally, you experienced Non-Disruptive Operations by transparently expanding and upgrading your FlexPod. VMware vMotion allowed you to move virtual machines across ESXi hosts without compromising the end-user experience. NetApp Data Motion allowed you to move entire storage volumes across controllers without interrupting access to the data. In combination, these two technologies allow you to completely eliminate scheduled downtime in your infrastructure: you can move your applications onto new hardware for load balancing and off of old hardware scheduled for retirement without interruption.


7. Appendix

7.1. Modeled Lab Topology Diagram

7.2. Extra Credit

If you would like more experience working with FlexPod, here are some additional exercises you can perform in this lab environment. We have provided a general overview of the process, but left room for you to figure out the specifics on your own. If you have any questions, please ask a proctor for help.

Redeploy the iometer VMs to simulate an updated template

Using VSC:
1. Navigate to Home → Solutions and Applications → NetApp
2. Select Provisioning and Cloning → Redeploy
3. Select iometer-gold
4. Click Redeploy
5. Select both VMs (iometer1, iometer2)
6. Next → Use current settings → Next → Apply

Look at CUIC's various operational views
1. Explore the Dashboard, Converged, Physical and Virtual menus

Repeat lab exercises using different tools

Review storage-efficiency

Using ONTAP CLI:
1. df -h -s

Using VSC:
1. Select Cluster1
2. Go to NetApp tab
3. Select Storage Details - NAS
4. Space Savings listed under Deduplication

Create a service profile using CUIC
1. Navigate to Physical → Compute → UCSM
2. (Try to) Add a new Service Profile
3. Create any missing required policies and pools
4. (Successfully) Add a new Service Profile

Expand compute cluster manually

Reset host so it can be re-added:
1. Right-click on host 192.168.0.52 → Enter Maintenance Mode
2. Wait for task to finish (you need to manually migrate VMs off the host before it will finish)
3. Unmount vm_datastore:
   1. Navigate to Configuration → Storage
   2. Right-click on datastore vm_datastore → Unmount
4. Right-click on host 192.168.0.52 → Remove


Use vSphere Client to add new host:
1. Right-click on cluster Cluster1 → Add Host
2. Enter:
   Host: esxi2.demo.netapp.com
   Username: root
   Password: netapp1
3. Next, Next, Next, ... Finish
4. Right-click on host esxi2.demo.netapp.com → Exit Maintenance Mode

Use VSC to mount datastores:
1. Right-click on host esxi2.demo.netapp.com → NetApp → Provisioning and Cloning → Mount datastores
2. Select vm_datastore

Rename aggregate using System Manager:
1. Select the aggregate (Cluster → Storage → Aggregates)
2. Click Edit and modify the name

Create data aggregate on cluster1-04 using ONTAP CLI:
1. Use the storage aggregate create command:
   aggregate: aggr1_cluster1_04
   nodes: cluster1-04
   diskcount: 42

Use Data Motion to move a volume

Using System Manager:
1. Select the volume (vserver2 → Volumes → iometer)
2. Click Move

Using CUIC:
1. Navigate to Physical → Storage → cluster1
2. Select vServers tab → vserver2 row
3. Click View Details
4. Select Volumes tab → iometer row
5. Click Move

Create an iSCSI LIF on cluster1-04

Using System Manager:
1. Vservers → vserver2 → Configuration → Network Interfaces
2. Click Create and enter:
   Name: vserver2_lif4
   Role: Data
   Protocols: iSCSI
   IP: 192.168.0.127
   Netmask: 255.255.255.0
   Home Node: cluster1-04
   Home Port: e0d

Using CUIC:
1. Physical → Storage → cluster1 → vServers → vserver2 → Create LIF
   Node Name: cluster1-04
   Port Name: e0d
   Logical Interface Name: vserver2_lif4
   Role: Data
   Subnet Mask: 255.255.255.0
   IP Address: 192.168.0.127
   Allowed Protocols: iSCSI

7.3. Turbo-Mode

This section contains all of the individual "turbo mode" blocks from the entire document. They have been pulled together to facilitate rapidly skipping over individual lab exercises.

1. Table of Contents

2. Introduction
2.1. Document Notation Key
2.2. Lab Topology

3. Simplified Administration

3.1. Create space-efficient clones using NetApp Virtual Storage Console

Create 2 clones of iometer-gold using VSC. Power them on and wait for them to finish booting. Boot is complete when Iometer is running. This may take up to 10 minutes.

3.2. Verify storage efficiency using OnCommand System Manager

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

3.3. Check on the clones

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

3.4. Create service profile template

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

4. Rapid Deployment, Simplified Scaling

4.1. Expand compute cluster
4.1.1. Create a workflow in CUIC
4.1.2. Execute a workflow in CUIC

In vCenter, add the new ESXi host, 192.168.0.52, with credentials root:netapp1. Right-click on the new host and navigate to NetApp → Provisioning and Cloning → Mount datastores. Use the wizard to mount vm_datastore on 192.168.0.52.

4.2. Expand storage cluster

4.2.1. Add a new storage controller to the cluster

SSH into cluster1-03 and cluster1-04 using PuTTY, type join, and press Enter until cluster setup is complete.

4.2.2. Rename the root aggregate

SSH into cluster1 and execute: storage aggregate rename -aggregate aggr0 -newname aggr0_cluster1_03

4.2.3. Create a data aggregate

SSH into cluster1 and execute: storage aggregate create -nodes cluster1-03 -aggregate aggr1_cluster1_03 -diskcount 42

5. Non-Disruptive Operations

5.1. Use vMotion to balance compute workload

Use vMotion to migrate iometer2 to the new ESXi host, 192.168.0.52.

5.2. Use Data Motion to balance storage workload

SSH into cluster1 and execute: volume move start -vserver vserver2 -volume iometer -destination-aggregate aggr1_cluster1_03 -foreground true

5.3. An Immortal Storage Infrastructure

5.3.1. Investigate Data Path

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

5.3.2. Create iSCSI LIF on cluster1-03

SSH into cluster1 and execute: network interface create -vserver vserver2 -lif vserver2_lif3 -role data -data-protocol iscsi -home-node cluster1-03 -home-port e0d -address 192.168.0.126 -netmask 255.255.255.0

5.3.3. Establish iSCSI session using SnapDrive

Open Command Prompt inside each Iometer VM and execute: sdcli iscsi_initiator establish_session -h 0 -hp 1 -t iqn.1992-08.com.netapp:sn.27e8b236fb7611e18186123478563412:vs.4 -np 192.168.0.126 3260

5.3.4. Investigate Data Path

Educational only. You may skip to the next section without affecting the remainder of the lab exercises.

6. Conclusion

7.4. List of differences/modifications made for lab environment

This section contains all of the individual "unreal" blocks from the entire document. They have been pulled together to provide you with an overview of which lab exercises are specific to this virtual lab environment.

1. Table of Contents

2. Introduction
2.1. Document Notation Key
2.2. Lab Topology

Our simulated lab environment is noticeably different from a real-world FlexPod. Please see Appendix 7.1 for a diagram of the real-world environment we are attempting to model.

3. Simplified Administration

3.1. Create space-efficient clones using NetApp Virtual Storage Console

If you get a "No storage controllers" error, restart the Virtual Storage Console service or just reboot the entire vCenter VM (which you are currently logged in to). Alternatively, just ask a proctor to help you. This is a side-effect of cloning lab environments and only shows up on the first boot after cloning.

3.2. Verify storage efficiency using OnCommand System Manager
3.3. Check on the clones
3.4. Create service profile template

The VMs you cloned log in automatically and execute a script that performs a few "uninteresting" operations that are important for the remainder of the lab. First, the script establishes iSCSI sessions with cluster1-01 and cluster1-02 using SnapDrive for Windows. Next, it uses SnapDrive to create and mount a 100 MB LUN located on the iometer volume in vserver2. The iometer volume resides on aggr1_cluster1_02. Finally, it launches Iometer to generate IO traffic and detect any connectivity errors between the VM and its LUN. For details, please reference the diagram in Section 3.4 or ask a proctor to clarify.

The service profile template would normally have been created to deploy the first ESXi host, but we will create it now to familiarize you with the UCS Manager interface. In addition, we will only be performing a subset of the tasks in the FlexPod deployment guide.

If you are unable to log in to UCS Manager (timeout/read error), open Firefox, and go to http://192.168.0.182/. Click Restart → Restart UCSPE. Select Yes, and click Restart UCS Emulator with Current Settings. Wait a few minutes for it to restart, and then try to log in again. This is unfortunately a known bug in the UCSM Simulator.

We would like to specifically call out the differences here from real life: in FlexPod, we strongly recommend SAN booting your UCS servers to enable free mobility of service profiles. We have only removed it for simplicity.

4. Rapid Deployment, Simplified Scaling

4.1. Expand compute cluster

4.1.1. Create a workflow in CUIC

In this lab, we don't want to wait for ESXi to be installed on a simulator (it takes a long time), so we have pre-installed ESXi and will be adding the pre-provisioned host to the cluster. CUIC is fully capable of performing the end-to-end orchestration required to deploy a new ESXi host.

4.1.2. Execute a workflow in CUIC

Although the example workflow is rather short, a full ESXi deployment workflow would create a LUN on the storage controller, configure zoning on the switch, create a Service Profile on UCS, associate it with a server, PXE boot the server, install ESXi, add the new host to vCenter, and mount the appropriate datastores. Despite the vast increase in complexity, the workflow is still extremely easy to execute. The full workflow is in the System folder and named Deploy ESXi Host.

4.2. Expand storage cluster

4.2.1. Add a new storage controller to the cluster

You would normally connect a serial cable to the storage controller's console port. However, because we are using simulators, we have pre-configured the storage controller's networking to allow access via SSH.

For a real-world storage cluster, you would be using jumbo frames (MTU 9000) for the cluster ports. When using physical storage controllers, Data ONTAP uses 9000 by default.

Because we pre-configured the network, the node management interface configuration was pre-populated during setup. Normally, you would need to enter the node management interface settings.

4.2.2. Rename the root aggregate

In this lab, we are adding two standalone nodes to the storage cluster. In a real-world environment, these nodes would be in a Storage Failover (SFO) pair for High Availability.

4.2.3. Create a data aggregate

We are only utilizing one of the new storage controllers in this lab. In real life, you would create data aggregates on both controllers.

5. Non-Disruptive Operations

5.1. Use vMotion to balance compute workload
5.2. Use Data Motion to balance storage workload

5.3. An Immortal Storage Infrastructure

5.3.1. Investigate Data Path
5.3.2. Create iSCSI LIF on cluster1-03
5.3.3. Establish iSCSI session using SnapDrive
5.3.4. Investigate Data Path

6. Conclusion
